Jul 9 23:56:16.730323 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:08:48 -00 2025
Jul 9 23:56:16.730339 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c257b65f06e0ad68d969d5b3e057f031663dc29a4487d91a77595a40c4dc82d6
Jul 9 23:56:16.730345 kernel: Disabled fast string operations
Jul 9 23:56:16.730350 kernel: BIOS-provided physical RAM map:
Jul 9 23:56:16.730354 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 9 23:56:16.730358 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 9 23:56:16.730364 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 9 23:56:16.730368 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 9 23:56:16.730373 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 9 23:56:16.730377 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 9 23:56:16.730381 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 9 23:56:16.730385 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 9 23:56:16.730389 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 9 23:56:16.730394 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 9 23:56:16.730400 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 9 23:56:16.730405 kernel: NX (Execute Disable) protection: active
Jul 9 23:56:16.730410 kernel: APIC: Static calls initialized
Jul 9 23:56:16.730414 kernel: SMBIOS 2.7 present.
Jul 9 23:56:16.730419 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 9 23:56:16.730424 kernel: vmware: hypercall mode: 0x00
Jul 9 23:56:16.730429 kernel: Hypervisor detected: VMware
Jul 9 23:56:16.730434 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 9 23:56:16.730440 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 9 23:56:16.730445 kernel: vmware: using clock offset of 2467991641 ns
Jul 9 23:56:16.730449 kernel: tsc: Detected 3408.000 MHz processor
Jul 9 23:56:16.730454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 23:56:16.730460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 23:56:16.730465 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 9 23:56:16.730469 kernel: total RAM covered: 3072M
Jul 9 23:56:16.730474 kernel: Found optimal setting for mtrr clean up
Jul 9 23:56:16.730480 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 9 23:56:16.730485 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jul 9 23:56:16.730491 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 23:56:16.730496 kernel: Using GB pages for direct mapping
Jul 9 23:56:16.730501 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:56:16.730506 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 9 23:56:16.730511 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 9 23:56:16.730516 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 9 23:56:16.730521 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 9 23:56:16.730526 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 9 23:56:16.730533 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 9 23:56:16.730538 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 9 23:56:16.730544 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 9 23:56:16.730549 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 9 23:56:16.730554 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 9 23:56:16.730559 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 9 23:56:16.730565 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 9 23:56:16.730570 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 9 23:56:16.730576 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 9 23:56:16.730581 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 9 23:56:16.730586 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 9 23:56:16.730591 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 9 23:56:16.730596 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 9 23:56:16.730601 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 9 23:56:16.730606 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 9 23:56:16.730612 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 9 23:56:16.730618 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 9 23:56:16.730623 kernel: system APIC only can use physical flat
Jul 9 23:56:16.730628 kernel: APIC: Switched APIC routing to: physical flat
Jul 9 23:56:16.730633 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 9 23:56:16.730638 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 9 23:56:16.730643 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 9 23:56:16.730648 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 9 23:56:16.730653 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 9 23:56:16.730658 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 9 23:56:16.730664 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 9 23:56:16.730669 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 9 23:56:16.730674 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jul 9 23:56:16.730679 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jul 9 23:56:16.730684 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jul 9 23:56:16.730689 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jul 9 23:56:16.730694 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jul 9 23:56:16.730699 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jul 9 23:56:16.730704 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jul 9 23:56:16.730709 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jul 9 23:56:16.730715 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jul 9 23:56:16.730720 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jul 9 23:56:16.730724 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jul 9 23:56:16.730730 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jul 9 23:56:16.730735 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jul 9 23:56:16.730739 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jul 9 23:56:16.730744 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jul 9 23:56:16.730749 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jul 9 23:56:16.730754 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jul 9 23:56:16.730759 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jul 9 23:56:16.730765 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jul 9 23:56:16.730770 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jul 9 23:56:16.730775 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jul 9 23:56:16.730780 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jul 9 23:56:16.730785 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jul 9 23:56:16.730790 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jul 9 23:56:16.730795 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jul 9 23:56:16.730800 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jul 9 23:56:16.730805 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jul 9 23:56:16.730810 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jul 9 23:56:16.730815 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jul 9 23:56:16.730821 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jul 9 23:56:16.730826 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jul 9 23:56:16.730831 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jul 9 23:56:16.730836 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jul 9 23:56:16.730841 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jul 9 23:56:16.730846 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jul 9 23:56:16.730851 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jul 9 23:56:16.730856 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jul 9 23:56:16.730860 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jul 9 23:56:16.730865 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jul 9 23:56:16.730871 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jul 9 23:56:16.730876 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jul 9 23:56:16.730881 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jul 9 23:56:16.730886 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jul 9 23:56:16.730891 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jul 9 23:56:16.730896 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jul 9 23:56:16.730901 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jul 9 23:56:16.730906 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 9 23:56:16.730911 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 9 23:56:16.730916 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 9 23:56:16.730922 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 9 23:56:16.730927 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 9 23:56:16.730936 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 9 23:56:16.730974 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 9 23:56:16.730980 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 9 23:56:16.730985 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 9 23:56:16.730990 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 9 23:56:16.730996 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 9 23:56:16.731001 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 9 23:56:16.731007 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 9 23:56:16.731013 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 9 23:56:16.731018 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 9 23:56:16.731023 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 9 23:56:16.731028 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 9 23:56:16.731034 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 9 23:56:16.731039 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 9 23:56:16.731044 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 9 23:56:16.731049 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 9 23:56:16.731055 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 9 23:56:16.731061 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 9 23:56:16.731066 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 9 23:56:16.731072 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 9 23:56:16.731077 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 9 23:56:16.731082 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 9 23:56:16.731087 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 9 23:56:16.731093 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 9 23:56:16.731098 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 9 23:56:16.731103 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 9 23:56:16.731109 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 9 23:56:16.731114 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 9 23:56:16.731120 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 9 23:56:16.731125 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 9 23:56:16.731130 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 9 23:56:16.731136 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 9 23:56:16.731141 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 9 23:56:16.731147 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 9 23:56:16.731152 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 9 23:56:16.731157 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 9 23:56:16.731162 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 9 23:56:16.731168 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 9 23:56:16.731174 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 9 23:56:16.731179 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 9 23:56:16.731184 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 9 23:56:16.731190 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 9 23:56:16.731195 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 9 23:56:16.731200 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 9 23:56:16.731206 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 9 23:56:16.731211 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 9 23:56:16.731216 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 9 23:56:16.731221 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 9 23:56:16.731228 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 9 23:56:16.731233 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 9 23:56:16.731238 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 9 23:56:16.731244 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 9 23:56:16.731249 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 9 23:56:16.731254 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 9 23:56:16.731259 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 9 23:56:16.731265 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 9 23:56:16.731270 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 9 23:56:16.731275 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 9 23:56:16.731280 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 9 23:56:16.731287 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 9 23:56:16.731292 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 9 23:56:16.731297 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 9 23:56:16.731302 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 9 23:56:16.731308 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 9 23:56:16.731313 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 9 23:56:16.731318 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 9 23:56:16.731324 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 9 23:56:16.731329 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 9 23:56:16.731334 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 9 23:56:16.731340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 9 23:56:16.731346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 9 23:56:16.731351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 9 23:56:16.731357 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 9 23:56:16.731363 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 9 23:56:16.731368 kernel: Zone ranges:
Jul 9 23:56:16.731374 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 23:56:16.731379 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 9 23:56:16.731385 kernel: Normal empty
Jul 9 23:56:16.731391 kernel: Movable zone start for each node
Jul 9 23:56:16.731397 kernel: Early memory node ranges
Jul 9 23:56:16.731402 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 9 23:56:16.731413 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 9 23:56:16.731419 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 9 23:56:16.731425 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 9 23:56:16.731430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 23:56:16.731436 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 9 23:56:16.731441 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 9 23:56:16.731447 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 9 23:56:16.731453 kernel: system APIC only can use physical flat
Jul 9 23:56:16.731459 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 9 23:56:16.731464 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 9 23:56:16.731470 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 9 23:56:16.731475 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 9 23:56:16.731480 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 9 23:56:16.731486 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 9 23:56:16.731491 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 9 23:56:16.731496 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 9 23:56:16.731502 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 9 23:56:16.731508 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 9 23:56:16.731513 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 9 23:56:16.731519 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 9 23:56:16.731524 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 9 23:56:16.731529 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 9 23:56:16.731535 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 9 23:56:16.731540 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 9 23:56:16.731546 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 9 23:56:16.731551 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 9 23:56:16.731557 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 9 23:56:16.731563 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 9 23:56:16.731568 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 9 23:56:16.731573 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 9 23:56:16.731578 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 9 23:56:16.731584 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 9 23:56:16.731589 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 9 23:56:16.731595 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 9 23:56:16.731600 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 9 23:56:16.731605 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 9 23:56:16.731612 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 9 23:56:16.731617 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 9 23:56:16.731623 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 9 23:56:16.731628 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 9 23:56:16.731633 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 9 23:56:16.731638 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 9 23:56:16.731644 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 9 23:56:16.731649 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 9 23:56:16.731654 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 9 23:56:16.731660 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 9 23:56:16.731666 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 9 23:56:16.731671 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 9 23:56:16.731677 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 9 23:56:16.731682 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 9 23:56:16.731687 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 9 23:56:16.731693 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 9 23:56:16.731698 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 9 23:56:16.731703 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 9 23:56:16.731709 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 9 23:56:16.731714 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 9 23:56:16.731720 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 9 23:56:16.731726 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 9 23:56:16.731731 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 9 23:56:16.731737 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 9 23:56:16.731742 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 9 23:56:16.731747 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 9 23:56:16.731753 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 9 23:56:16.731758 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 9 23:56:16.731763 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 9 23:56:16.731768 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 9 23:56:16.731775 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 9 23:56:16.731780 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 9 23:56:16.731786 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 9 23:56:16.731791 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 9 23:56:16.731796 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 9 23:56:16.731802 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 9 23:56:16.731807 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 9 23:56:16.731812 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 9 23:56:16.731818 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 9 23:56:16.731824 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 9 23:56:16.731829 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 9 23:56:16.731834 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 9 23:56:16.731840 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 9 23:56:16.731845 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 9 23:56:16.731851 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 9 23:56:16.731856 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 9 23:56:16.731862 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 9 23:56:16.731867 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 9 23:56:16.731872 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 9 23:56:16.731879 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 9 23:56:16.731884 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 9 23:56:16.731889 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 9 23:56:16.731895 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 9 23:56:16.731900 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 9 23:56:16.731905 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 9 23:56:16.731911 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 9 23:56:16.731916 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 9 23:56:16.731921 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 9 23:56:16.731927 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 9 23:56:16.731933 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 9 23:56:16.731956 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 9 23:56:16.731962 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 9 23:56:16.731968 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 9 23:56:16.731973 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 9 23:56:16.731978 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 9 23:56:16.731984 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 9 23:56:16.731989 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 9 23:56:16.731994 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 9 23:56:16.732000 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 9 23:56:16.732007 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 9 23:56:16.732012 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 9 23:56:16.732017 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 9 23:56:16.732023 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 9 23:56:16.732028 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 9 23:56:16.732033 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 9 23:56:16.732039 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 9 23:56:16.732044 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 9 23:56:16.732049 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 9 23:56:16.732055 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 9 23:56:16.732061 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 9 23:56:16.732066 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 9 23:56:16.732072 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 9 23:56:16.732077 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 9 23:56:16.732083 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 9 23:56:16.732088 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 9 23:56:16.732093 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 9 23:56:16.732099 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 9 23:56:16.732104 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 9 23:56:16.732109 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 9 23:56:16.732116 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 9 23:56:16.732121 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 9 23:56:16.732126 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 9 23:56:16.732132 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 9 23:56:16.732137 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 9 23:56:16.732143 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 9 23:56:16.732148 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 9 23:56:16.732153 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 9 23:56:16.732159 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 9 23:56:16.732164 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 9 23:56:16.732170 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 9 23:56:16.732176 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 9 23:56:16.732181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 9 23:56:16.732187 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 23:56:16.732192 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 9 23:56:16.732198 kernel: TSC deadline timer available
Jul 9 23:56:16.732203 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 9 23:56:16.732209 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 9 23:56:16.732215 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 9 23:56:16.732221 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 23:56:16.732227 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 9 23:56:16.732232 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Jul 9 23:56:16.732238 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Jul 9 23:56:16.732243 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 9 23:56:16.732248 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 9 23:56:16.732254 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 9 23:56:16.732259 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 9 23:56:16.732264 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 9 23:56:16.732276 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 9 23:56:16.732283 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 9 23:56:16.732289 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 9 23:56:16.732294 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 9 23:56:16.732300 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 9 23:56:16.732306 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 9 23:56:16.732311 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 9 23:56:16.732317 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 9 23:56:16.732323 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 9 23:56:16.732329 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 9 23:56:16.732335 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 9 23:56:16.732342 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c257b65f06e0ad68d969d5b3e057f031663dc29a4487d91a77595a40c4dc82d6
Jul 9 23:56:16.732348 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:56:16.732354 kernel: random: crng init done
Jul 9 23:56:16.732359 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 9 23:56:16.732366 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 9 23:56:16.732373 kernel: printk: log_buf_len min size: 262144 bytes
Jul 9 23:56:16.732379 kernel: printk: log_buf_len: 1048576 bytes
Jul 9 23:56:16.732384 kernel: printk: early log buf free: 239648(91%)
Jul 9 23:56:16.732390 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:56:16.732396 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 9 23:56:16.732401 kernel: Fallback order for Node 0: 0
Jul 9 23:56:16.732407 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jul 9 23:56:16.732413 kernel: Policy zone: DMA32
Jul 9 23:56:16.732419 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:56:16.732425 kernel: Memory: 1934276K/2096628K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 162092K reserved, 0K cma-reserved)
Jul 9 23:56:16.732432 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 9 23:56:16.732438 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 9 23:56:16.732444 kernel: ftrace: allocated 149 pages with 4 groups
Jul 9 23:56:16.732450 kernel: Dynamic Preempt: voluntary
Jul 9 23:56:16.732456 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:56:16.732463 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:56:16.732469 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 9 23:56:16.732475 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:56:16.732480 kernel: Rude variant of Tasks RCU enabled.
Jul 9 23:56:16.732486 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:56:16.732492 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:56:16.732502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 9 23:56:16.732510 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 9 23:56:16.732516 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 9 23:56:16.732521 kernel: Console: colour VGA+ 80x25
Jul 9 23:56:16.732530 kernel: printk: console [tty0] enabled
Jul 9 23:56:16.732535 kernel: printk: console [ttyS0] enabled
Jul 9 23:56:16.732541 kernel: ACPI: Core revision 20230628
Jul 9 23:56:16.732547 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 9 23:56:16.732553 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 23:56:16.732559 kernel: x2apic enabled
Jul 9 23:56:16.732565 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 23:56:16.732570 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 23:56:16.732576 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 9 23:56:16.732584 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 9 23:56:16.732589 kernel: Disabled fast string operations
Jul 9 23:56:16.732595 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 9 23:56:16.732601 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 9 23:56:16.732607 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 23:56:16.732613 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 9 23:56:16.732618 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 9 23:56:16.732624 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 9 23:56:16.732630 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 9 23:56:16.732637 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 9 23:56:16.732643 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 9 23:56:16.732649 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 9 23:56:16.732655 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 9 23:56:16.732660 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 9 23:56:16.732666 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 9 23:56:16.732672 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 9 23:56:16.732679 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 9 23:56:16.732684 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 9 23:56:16.732691 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 9 23:56:16.732697 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 9 23:56:16.732703 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 9 23:56:16.732709 kernel: Freeing SMP alternatives memory: 32K
Jul 9 23:56:16.732715 kernel: pid_max: default: 131072 minimum: 1024
Jul 9 23:56:16.732720 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 9 23:56:16.732726 kernel: landlock: Up and running.
Jul 9 23:56:16.732732 kernel: SELinux: Initializing.
Jul 9 23:56:16.732738 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 9 23:56:16.732745 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 9 23:56:16.732751 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 9 23:56:16.732757 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 9 23:56:16.732763 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 9 23:56:16.732769 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 9 23:56:16.732775 kernel: Performance Events: Skylake events, core PMU driver.
Jul 9 23:56:16.732781 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 9 23:56:16.732787 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 9 23:56:16.732794 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 9 23:56:16.732800 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 9 23:56:16.732805 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 9 23:56:16.732811 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 9 23:56:16.732816 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 9 23:56:16.732822 kernel: ... version: 1
Jul 9 23:56:16.732828 kernel: ... bit width: 48
Jul 9 23:56:16.732833 kernel: ... generic registers: 4
Jul 9 23:56:16.732839 kernel: ... value mask: 0000ffffffffffff
Jul 9 23:56:16.732847 kernel: ... max period: 000000007fffffff
Jul 9 23:56:16.732852 kernel: ... fixed-purpose events: 0
Jul 9 23:56:16.732858 kernel: ... event mask: 000000000000000f
Jul 9 23:56:16.732864 kernel: signal: max sigframe size: 1776
Jul 9 23:56:16.732870 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:56:16.732876 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:56:16.732882 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 9 23:56:16.732888 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:56:16.732893 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 23:56:16.732900 kernel: ....
node #0, CPUs: #1 Jul 9 23:56:16.732906 kernel: Disabled fast string operations Jul 9 23:56:16.732912 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 9 23:56:16.732917 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 9 23:56:16.732923 kernel: smp: Brought up 1 node, 2 CPUs Jul 9 23:56:16.732929 kernel: smpboot: Max logical packages: 128 Jul 9 23:56:16.732935 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 9 23:56:16.733083 kernel: devtmpfs: initialized Jul 9 23:56:16.733090 kernel: x86/mm: Memory block size: 128MB Jul 9 23:56:16.733096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 9 23:56:16.733105 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 23:56:16.733111 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 9 23:56:16.733116 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 23:56:16.733122 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 23:56:16.733128 kernel: audit: initializing netlink subsys (disabled) Jul 9 23:56:16.733134 kernel: audit: type=2000 audit(1752105375.084:1): state=initialized audit_enabled=0 res=1 Jul 9 23:56:16.733140 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 23:56:16.733336 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 9 23:56:16.733342 kernel: cpuidle: using governor menu Jul 9 23:56:16.733351 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 9 23:56:16.733357 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 23:56:16.733362 kernel: dca service started, version 1.12.1 Jul 9 23:56:16.733368 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 9 23:56:16.733374 kernel: PCI: Using configuration type 1 for base access Jul 9 23:56:16.733380 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jul 9 23:56:16.733386 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 9 23:56:16.733392 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 9 23:56:16.733398 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 23:56:16.733405 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 23:56:16.733411 kernel: ACPI: Added _OSI(Module Device) Jul 9 23:56:16.733416 kernel: ACPI: Added _OSI(Processor Device) Jul 9 23:56:16.733422 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 23:56:16.733428 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 23:56:16.733434 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 9 23:56:16.733440 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 9 23:56:16.733445 kernel: ACPI: Interpreter enabled Jul 9 23:56:16.733451 kernel: ACPI: PM: (supports S0 S1 S5) Jul 9 23:56:16.733458 kernel: ACPI: Using IOAPIC for interrupt routing Jul 9 23:56:16.733465 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 9 23:56:16.733471 kernel: PCI: Using E820 reservations for host bridge windows Jul 9 23:56:16.733477 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 9 23:56:16.733483 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 9 23:56:16.733566 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 9 23:56:16.733622 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 9 23:56:16.733676 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 9 23:56:16.733685 kernel: PCI host bridge to bus 0000:00 Jul 9 23:56:16.733741 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 9 23:56:16.733789 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 9 23:56:16.733835 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 9 23:56:16.733880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 9 23:56:16.733925 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 9 23:56:16.733994 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 9 23:56:16.734057 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 9 23:56:16.734119 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 9 23:56:16.734178 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 9 23:56:16.734235 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 9 23:56:16.734288 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 9 23:56:16.734359 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 9 23:56:16.734413 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 9 23:56:16.734465 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 9 23:56:16.734527 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 9 23:56:16.734585 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 9 23:56:16.734638 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 9 23:56:16.734690 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 9 23:56:16.734750 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 9 23:56:16.734804 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 9 23:56:16.734856 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 9 23:56:16.734912 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 9 23:56:16.734993 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 9 23:56:16.735047 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 9 23:56:16.736017 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 9 23:56:16.736078 kernel: 
pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 9 23:56:16.736133 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 9 23:56:16.736195 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 9 23:56:16.736253 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.736308 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.736365 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.736422 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.736479 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.736533 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.736592 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.736646 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.736704 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.736761 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.736819 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.736873 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.736931 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.739788 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.739857 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.739920 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.739999 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740062 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740121 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740175 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740233 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 
0x060400 Jul 9 23:56:16.740292 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740350 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740404 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740461 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740524 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740583 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740642 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740700 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740754 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740813 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.740867 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.740925 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741012 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741070 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741124 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741180 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741233 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741290 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741347 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741405 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741459 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741516 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741570 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741627 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 
class 0x060400 Jul 9 23:56:16.741681 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741742 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741796 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741853 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.741906 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.741971 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742026 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742086 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742140 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742197 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742315 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742377 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742431 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742492 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742547 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742622 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742713 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742774 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 9 23:56:16.742828 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.742889 kernel: pci_bus 0000:01: extended config space not accessible Jul 9 23:56:16.744965 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 9 23:56:16.745036 kernel: pci_bus 0000:02: extended config space not accessible Jul 9 23:56:16.745046 kernel: acpiphp: Slot [32] registered Jul 9 23:56:16.745056 kernel: acpiphp: Slot [33] registered Jul 9 23:56:16.745062 kernel: acpiphp: 
Slot [34] registered Jul 9 23:56:16.745068 kernel: acpiphp: Slot [35] registered Jul 9 23:56:16.745074 kernel: acpiphp: Slot [36] registered Jul 9 23:56:16.745082 kernel: acpiphp: Slot [37] registered Jul 9 23:56:16.745088 kernel: acpiphp: Slot [38] registered Jul 9 23:56:16.745094 kernel: acpiphp: Slot [39] registered Jul 9 23:56:16.745100 kernel: acpiphp: Slot [40] registered Jul 9 23:56:16.745106 kernel: acpiphp: Slot [41] registered Jul 9 23:56:16.745112 kernel: acpiphp: Slot [42] registered Jul 9 23:56:16.745117 kernel: acpiphp: Slot [43] registered Jul 9 23:56:16.745123 kernel: acpiphp: Slot [44] registered Jul 9 23:56:16.745129 kernel: acpiphp: Slot [45] registered Jul 9 23:56:16.745135 kernel: acpiphp: Slot [46] registered Jul 9 23:56:16.745142 kernel: acpiphp: Slot [47] registered Jul 9 23:56:16.745147 kernel: acpiphp: Slot [48] registered Jul 9 23:56:16.745153 kernel: acpiphp: Slot [49] registered Jul 9 23:56:16.745159 kernel: acpiphp: Slot [50] registered Jul 9 23:56:16.745165 kernel: acpiphp: Slot [51] registered Jul 9 23:56:16.745171 kernel: acpiphp: Slot [52] registered Jul 9 23:56:16.745176 kernel: acpiphp: Slot [53] registered Jul 9 23:56:16.745182 kernel: acpiphp: Slot [54] registered Jul 9 23:56:16.745188 kernel: acpiphp: Slot [55] registered Jul 9 23:56:16.745195 kernel: acpiphp: Slot [56] registered Jul 9 23:56:16.745201 kernel: acpiphp: Slot [57] registered Jul 9 23:56:16.745206 kernel: acpiphp: Slot [58] registered Jul 9 23:56:16.745212 kernel: acpiphp: Slot [59] registered Jul 9 23:56:16.745218 kernel: acpiphp: Slot [60] registered Jul 9 23:56:16.745224 kernel: acpiphp: Slot [61] registered Jul 9 23:56:16.745230 kernel: acpiphp: Slot [62] registered Jul 9 23:56:16.745235 kernel: acpiphp: Slot [63] registered Jul 9 23:56:16.745293 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 9 23:56:16.745351 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 9 23:56:16.745404 kernel: pci 0000:00:11.0: bridge window 
[mem 0xfd600000-0xfdffffff] Jul 9 23:56:16.745456 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 9 23:56:16.745508 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 9 23:56:16.745560 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 9 23:56:16.745614 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 9 23:56:16.745667 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 9 23:56:16.745719 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 9 23:56:16.745782 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 9 23:56:16.745838 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 9 23:56:16.745892 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 9 23:56:16.747044 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 9 23:56:16.747104 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 9 23:56:16.747159 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 9 23:56:16.747214 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 9 23:56:16.747270 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 9 23:56:16.747322 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 9 23:56:16.747375 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 9 23:56:16.747428 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 9 23:56:16.747480 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 9 23:56:16.747536 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 9 23:56:16.747590 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 9 23:56:16.747646 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 9 23:56:16.747699 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 9 23:56:16.747752 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 9 23:56:16.747805 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 9 23:56:16.747858 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 9 23:56:16.747909 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 9 23:56:16.750989 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 9 23:56:16.751053 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 9 23:56:16.751114 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 9 23:56:16.751172 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 9 23:56:16.751226 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 9 23:56:16.751280 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 9 23:56:16.751337 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 9 23:56:16.751391 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 9 23:56:16.751444 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Jul 9 23:56:16.751498 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 9 23:56:16.751551 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 9 23:56:16.751604 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 9 23:56:16.751664 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 9 23:56:16.751720 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 9 23:56:16.751778 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 9 23:56:16.751832 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 9 23:56:16.751886 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 9 23:56:16.753527 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 9 23:56:16.753594 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 9 23:56:16.753648 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 9 23:56:16.753703 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 9 23:56:16.753757 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 9 23:56:16.753813 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 9 23:56:16.753867 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 9 23:56:16.753921 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 9 23:56:16.754202 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 9 23:56:16.754260 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 9 23:56:16.754314 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 9 23:56:16.754369 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 9 23:56:16.754422 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 9 23:56:16.754545 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 9 23:56:16.754622 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 9 23:56:16.754680 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 9 23:56:16.754733 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 9 23:56:16.754785 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 9 23:56:16.754838 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 9 23:56:16.754891 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 9 23:56:16.756590 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 9 23:56:16.756656 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 9 23:56:16.756713 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 9 23:56:16.756768 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 9 23:56:16.756823 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 9 23:56:16.756878 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 9 23:56:16.756931 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Jul 9 23:56:16.757005 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 9 23:56:16.757062 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 9 23:56:16.757115 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 9 23:56:16.757168 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 9 23:56:16.757221 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 9 23:56:16.757274 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 9 23:56:16.757328 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 9 23:56:16.757382 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 9 23:56:16.757436 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 9 23:56:16.757492 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 9 23:56:16.757545 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 9 23:56:16.757600 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 9 23:56:16.757654 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 9 23:56:16.757707 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 9 23:56:16.757759 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 9 23:56:16.757813 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 9 23:56:16.757882 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 9 23:56:16.757965 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 9 23:56:16.758024 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 9 23:56:16.758077 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 9 23:56:16.758131 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 9 23:56:16.758184 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 9 23:56:16.758238 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Jul 9 23:56:16.758291 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 9 23:56:16.758345 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 9 23:56:16.758402 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 9 23:56:16.758455 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 9 23:56:16.758513 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 9 23:56:16.758567 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 9 23:56:16.758620 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 9 23:56:16.758674 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 9 23:56:16.758727 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 9 23:56:16.758780 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 9 23:56:16.758836 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 9 23:56:16.758889 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 9 23:56:16.758973 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 9 23:56:16.759031 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 9 23:56:16.759084 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 9 23:56:16.759139 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 9 23:56:16.759191 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 9 23:56:16.759246 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 9 23:56:16.759299 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 9 23:56:16.759350 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 9 23:56:16.759402 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 9 23:56:16.759454 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 9 23:56:16.759507 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Jul 9 23:56:16.759558 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 9 23:56:16.759611 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 9 23:56:16.759666 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 9 23:56:16.759720 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 9 23:56:16.759774 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 9 23:56:16.759826 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 9 23:56:16.759878 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 9 23:56:16.759933 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 9 23:56:16.759994 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 9 23:56:16.760047 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 9 23:56:16.760058 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 9 23:56:16.760064 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 9 23:56:16.760070 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 9 23:56:16.760077 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 9 23:56:16.760083 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 9 23:56:16.760089 kernel: iommu: Default domain type: Translated Jul 9 23:56:16.760095 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 9 23:56:16.760101 kernel: PCI: Using ACPI for IRQ routing Jul 9 23:56:16.760107 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 9 23:56:16.760114 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 9 23:56:16.760120 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 9 23:56:16.760172 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 9 23:56:16.760224 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 9 23:56:16.760276 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Jul 9 23:56:16.760285 kernel: vgaarb: loaded Jul 9 23:56:16.760292 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 9 23:56:16.760298 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 9 23:56:16.760304 kernel: clocksource: Switched to clocksource tsc-early Jul 9 23:56:16.760312 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 23:56:16.760318 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 23:56:16.760324 kernel: pnp: PnP ACPI init Jul 9 23:56:16.760379 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 9 23:56:16.760428 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 9 23:56:16.760477 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 9 23:56:16.760529 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 9 23:56:16.760586 kernel: pnp 00:06: [dma 2] Jul 9 23:56:16.760640 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 9 23:56:16.760691 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 9 23:56:16.760740 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 9 23:56:16.760749 kernel: pnp: PnP ACPI: found 8 devices Jul 9 23:56:16.760755 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 9 23:56:16.760761 kernel: NET: Registered PF_INET protocol family Jul 9 23:56:16.760767 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 23:56:16.760775 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 9 23:56:16.760781 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 23:56:16.760788 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 9 23:56:16.760794 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 9 23:56:16.760800 kernel: TCP: Hash 
tables configured (established 16384 bind 16384) Jul 9 23:56:16.760806 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 9 23:56:16.760811 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 9 23:56:16.760817 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 23:56:16.760824 kernel: NET: Registered PF_XDP protocol family Jul 9 23:56:16.760878 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 9 23:56:16.760933 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 9 23:56:16.760995 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 9 23:56:16.761050 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 9 23:56:16.761104 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 9 23:56:16.761159 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 9 23:56:16.761217 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 9 23:56:16.761271 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 9 23:56:16.761326 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 9 23:56:16.761381 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 9 23:56:16.761435 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 9 23:56:16.761491 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 9 23:56:16.761548 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 9 23:56:16.761602 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 9 23:56:16.761655 kernel: pci 
0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 9 23:56:16.761718 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 9 23:56:16.761772 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 9 23:56:16.761828 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 9 23:56:16.761882 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 9 23:56:16.761935 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 9 23:56:16.762032 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 9 23:56:16.762086 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 9 23:56:16.762138 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 9 23:56:16.763048 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 9 23:56:16.763109 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 9 23:56:16.763166 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763221 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.763275 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763328 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.763382 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763435 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.763491 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763544 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.763597 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763649 kernel: pci 0000:00:15.7: BAR 13: failed to assign 
[io size 0x1000] Jul 9 23:56:16.763702 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763754 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.763807 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.763860 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.763916 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765003 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765065 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765120 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765174 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765228 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765302 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765383 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765460 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765516 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765569 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765622 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765676 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765730 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765783 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.765837 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.765894 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.766335 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.766400 kernel: pci 
0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.766457 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.766515 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.766570 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.766623 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.766676 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.766732 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.766785 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.766838 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.766891 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.766951 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767014 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767068 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767121 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767175 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767231 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767284 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767338 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767392 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767444 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767496 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767554 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767608 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 9 
23:56:16.767661 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767717 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767770 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767823 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767876 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.767929 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.767999 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768054 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768108 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768162 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768214 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768271 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768325 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768377 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768429 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768482 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768535 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768587 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768640 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768694 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768749 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.768802 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768855 kernel: pci 0000:00:15.6: BAR 13: failed to 
assign [io size 0x1000] Jul 9 23:56:16.768909 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.768982 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.769038 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.769091 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.769144 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 9 23:56:16.769198 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 9 23:56:16.769251 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 9 23:56:16.769308 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 9 23:56:16.769361 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 9 23:56:16.769413 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 9 23:56:16.769465 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 9 23:56:16.769521 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 9 23:56:16.769574 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 9 23:56:16.769627 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 9 23:56:16.769680 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 9 23:56:16.769735 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 9 23:56:16.769790 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 9 23:56:16.769843 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 9 23:56:16.769896 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 9 23:56:16.770003 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 9 23:56:16.770062 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 9 23:56:16.770115 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 9 23:56:16.770169 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 9 
23:56:16.770221 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 9 23:56:16.770277 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 9 23:56:16.770329 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 9 23:56:16.770383 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 9 23:56:16.770435 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 9 23:56:16.770487 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 9 23:56:16.770549 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 9 23:56:16.770606 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 9 23:56:16.770659 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 9 23:56:16.770712 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 9 23:56:16.770765 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 9 23:56:16.770818 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 9 23:56:16.770871 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 9 23:56:16.770924 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 9 23:56:16.771024 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 9 23:56:16.771077 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 9 23:56:16.771136 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 9 23:56:16.771189 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 9 23:56:16.771242 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 9 23:56:16.771295 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 9 23:56:16.771348 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 9 23:56:16.771401 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 9 23:56:16.771454 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] 
Jul 9 23:56:16.771508 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 9 23:56:16.771561 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 9 23:56:16.772885 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 9 23:56:16.772976 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 9 23:56:16.773043 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 9 23:56:16.773098 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 9 23:56:16.773152 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 9 23:56:16.773205 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 9 23:56:16.773258 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 9 23:56:16.773312 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 9 23:56:16.773364 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 9 23:56:16.773417 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 9 23:56:16.773473 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 9 23:56:16.773533 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 9 23:56:16.773587 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 9 23:56:16.773640 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 9 23:56:16.773693 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 9 23:56:16.773747 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 9 23:56:16.773799 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 9 23:56:16.773852 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 9 23:56:16.773904 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 9 23:56:16.773986 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 9 23:56:16.774043 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 
9 23:56:16.774096 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 9 23:56:16.774150 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 9 23:56:16.774204 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 9 23:56:16.774257 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 9 23:56:16.774311 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 9 23:56:16.774364 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 9 23:56:16.774419 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 9 23:56:16.774472 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 9 23:56:16.774534 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 9 23:56:16.774589 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 9 23:56:16.774641 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 9 23:56:16.774694 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 9 23:56:16.774747 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 9 23:56:16.774800 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 9 23:56:16.774853 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 9 23:56:16.774906 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 9 23:56:16.775562 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 9 23:56:16.775628 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 9 23:56:16.775684 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 9 23:56:16.775738 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 9 23:56:16.775793 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 9 23:56:16.775847 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 9 23:56:16.775900 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 9 
23:56:16.775977 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 9 23:56:16.776033 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 9 23:56:16.776087 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 9 23:56:16.776140 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 9 23:56:16.776197 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 9 23:56:16.776249 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 9 23:56:16.776302 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 9 23:56:16.776354 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 9 23:56:16.776408 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 9 23:56:16.776461 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 9 23:56:16.776516 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 9 23:56:16.776569 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 9 23:56:16.776622 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 9 23:56:16.776678 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 9 23:56:16.776730 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 9 23:56:16.776783 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 9 23:56:16.776836 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 9 23:56:16.776889 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 9 23:56:16.777967 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 9 23:56:16.778038 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 9 23:56:16.778097 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 9 23:56:16.778153 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 9 23:56:16.778208 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 9 
23:56:16.778266 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 9 23:56:16.778320 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 9 23:56:16.778374 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 9 23:56:16.778428 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 9 23:56:16.778481 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 9 23:56:16.778533 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 9 23:56:16.778581 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 9 23:56:16.778629 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 9 23:56:16.778675 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 9 23:56:16.778725 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 9 23:56:16.778776 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 9 23:56:16.778826 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 9 23:56:16.778873 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 9 23:56:16.778922 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 9 23:56:16.779690 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 9 23:56:16.779748 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 9 23:56:16.779802 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 9 23:56:16.779850 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 9 23:56:16.779903 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 9 23:56:16.780530 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 9 23:56:16.780589 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 9 23:56:16.780645 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 9 23:56:16.780716 kernel: pci_bus 0000:04: resource 1 
[mem 0xfd100000-0xfd1fffff] Jul 9 23:56:16.780769 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 9 23:56:16.780823 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 9 23:56:16.780873 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 9 23:56:16.780922 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 9 23:56:16.782019 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 9 23:56:16.782076 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 9 23:56:16.782131 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 9 23:56:16.782184 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 9 23:56:16.782237 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 9 23:56:16.782286 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 9 23:56:16.782348 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 9 23:56:16.782398 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 9 23:56:16.782450 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 9 23:56:16.782503 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 9 23:56:16.782567 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 9 23:56:16.782616 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 9 23:56:16.782665 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 9 23:56:16.782717 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 9 23:56:16.782770 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 9 23:56:16.782819 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 9 23:56:16.782871 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 9 23:56:16.782921 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 9 
23:56:16.785158 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 9 23:56:16.785220 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 9 23:56:16.785271 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 9 23:56:16.785329 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 9 23:56:16.785378 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 9 23:56:16.785431 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 9 23:56:16.785479 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 9 23:56:16.785534 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 9 23:56:16.785584 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 9 23:56:16.785657 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 9 23:56:16.786833 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 9 23:56:16.786904 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 9 23:56:16.786965 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 9 23:56:16.787015 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 9 23:56:16.787067 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 9 23:56:16.787115 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 9 23:56:16.787166 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 9 23:56:16.787218 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 9 23:56:16.787266 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 9 23:56:16.787313 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 9 23:56:16.787364 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 9 23:56:16.787412 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 9 23:56:16.787464 kernel: 
pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 9 23:56:16.787534 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 9 23:56:16.787601 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 9 23:56:16.787649 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 9 23:56:16.787703 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 9 23:56:16.787752 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 9 23:56:16.787803 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 9 23:56:16.787853 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 9 23:56:16.787906 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 9 23:56:16.787963 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 9 23:56:16.788014 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 9 23:56:16.788066 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 9 23:56:16.788117 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 9 23:56:16.788164 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 9 23:56:16.788216 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 9 23:56:16.788265 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 9 23:56:16.788317 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 9 23:56:16.788365 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 9 23:56:16.788417 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 9 23:56:16.788467 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 9 23:56:16.788518 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 9 23:56:16.788567 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 9 23:56:16.788619 kernel: pci_bus 
0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 9 23:56:16.788667 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 9 23:56:16.788720 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 9 23:56:16.788771 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 9 23:56:16.788828 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 9 23:56:16.788839 kernel: PCI: CLS 32 bytes, default 64 Jul 9 23:56:16.788845 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 9 23:56:16.788852 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 9 23:56:16.788859 kernel: clocksource: Switched to clocksource tsc Jul 9 23:56:16.788865 kernel: Initialise system trusted keyrings Jul 9 23:56:16.788871 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 9 23:56:16.788879 kernel: Key type asymmetric registered Jul 9 23:56:16.788885 kernel: Asymmetric key parser 'x509' registered Jul 9 23:56:16.788891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 9 23:56:16.788898 kernel: io scheduler mq-deadline registered Jul 9 23:56:16.788904 kernel: io scheduler kyber registered Jul 9 23:56:16.788911 kernel: io scheduler bfq registered Jul 9 23:56:16.789250 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 9 23:56:16.789312 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.789369 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 9 23:56:16.789428 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.789499 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 9 23:56:16.789585 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.789651 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 9 23:56:16.789709 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.789769 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 9 23:56:16.789827 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.789882 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 9 23:56:16.789936 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790046 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 9 23:56:16.790100 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790159 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 9 23:56:16.790211 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790264 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 9 23:56:16.790315 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790369 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 9 23:56:16.790421 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790475 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 9 23:56:16.790565 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790618 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 9 23:56:16.790670 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790723 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 9 23:56:16.790775 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790829 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 9 23:56:16.790885 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.790951 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 9 23:56:16.791013 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791066 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 9 23:56:16.791120 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791177 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 9 23:56:16.791230 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791282 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 9 23:56:16.791335 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791388 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 9 23:56:16.791440 kernel: pcieport 0000:00:17.2: 
pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791493 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 9 23:56:16.791566 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791620 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 9 23:56:16.791692 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791747 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 9 23:56:16.791800 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791857 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 9 23:56:16.791911 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.791989 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 9 23:56:16.792043 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792097 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 9 23:56:16.792151 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792207 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 9 23:56:16.792261 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792314 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 9 23:56:16.792367 kernel: pcieport 
0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792422 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 9 23:56:16.792476 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792534 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 9 23:56:16.792588 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792643 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 9 23:56:16.792698 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792753 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 9 23:56:16.792809 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792863 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 9 23:56:16.792918 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 9 23:56:16.792929 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 9 23:56:16.792936 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 23:56:16.792962 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 9 23:56:16.792969 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 9 23:56:16.792978 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 9 23:56:16.792985 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 9 23:56:16.793045 kernel: rtc_cmos 00:01: registered as rtc0 Jul 9 23:56:16.793097 kernel: 
rtc_cmos 00:01: setting system clock to 2025-07-09T23:56:16 UTC (1752105376) Jul 9 23:56:16.793147 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 9 23:56:16.793156 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jul 9 23:56:16.793162 kernel: intel_pstate: CPU model not supported Jul 9 23:56:16.793169 kernel: NET: Registered PF_INET6 protocol family Jul 9 23:56:16.793177 kernel: Segment Routing with IPv6 Jul 9 23:56:16.793183 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 23:56:16.793190 kernel: NET: Registered PF_PACKET protocol family Jul 9 23:56:16.793196 kernel: Key type dns_resolver registered Jul 9 23:56:16.793203 kernel: IPI shorthand broadcast: enabled Jul 9 23:56:16.793209 kernel: sched_clock: Marking stable (896003232, 217393118)->(1164387466, -50991116) Jul 9 23:56:16.793215 kernel: registered taskstats version 1 Jul 9 23:56:16.793221 kernel: Loading compiled-in X.509 certificates Jul 9 23:56:16.793228 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 50743221a03cbb928e294992219bf2bc20f6f14b' Jul 9 23:56:16.793235 kernel: Key type .fscrypt registered Jul 9 23:56:16.793241 kernel: Key type fscrypt-provisioning registered Jul 9 23:56:16.793247 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 9 23:56:16.793254 kernel: ima: Allocated hash algorithm: sha1 Jul 9 23:56:16.793260 kernel: ima: No architecture policies found Jul 9 23:56:16.793267 kernel: clk: Disabling unused clocks Jul 9 23:56:16.793273 kernel: Freeing unused kernel image (initmem) memory: 43488K Jul 9 23:56:16.793279 kernel: Write protecting the kernel read-only data: 38912k Jul 9 23:56:16.793286 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jul 9 23:56:16.793293 kernel: Run /init as init process Jul 9 23:56:16.793299 kernel: with arguments: Jul 9 23:56:16.793305 kernel: /init Jul 9 23:56:16.793312 kernel: with environment: Jul 9 23:56:16.793318 kernel: HOME=/ Jul 9 23:56:16.793324 kernel: TERM=linux Jul 9 23:56:16.793330 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 23:56:16.793337 systemd[1]: Successfully made /usr/ read-only. Jul 9 23:56:16.793346 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:56:16.793354 systemd[1]: Detected virtualization vmware. Jul 9 23:56:16.793361 systemd[1]: Detected architecture x86-64. Jul 9 23:56:16.793367 systemd[1]: Running in initrd. Jul 9 23:56:16.793373 systemd[1]: No hostname configured, using default hostname. Jul 9 23:56:16.793380 systemd[1]: Hostname set to . Jul 9 23:56:16.793387 systemd[1]: Initializing machine ID from random generator. Jul 9 23:56:16.793393 systemd[1]: Queued start job for default target initrd.target. Jul 9 23:56:16.793401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:56:16.793407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 9 23:56:16.793414 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 23:56:16.793421 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:56:16.793428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 9 23:56:16.793435 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 9 23:56:16.793442 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 23:56:16.793450 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 9 23:56:16.793457 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:56:16.793464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:56:16.793470 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:56:16.793477 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:56:16.793484 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:56:16.793490 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:56:16.793497 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:56:16.793505 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:56:16.793512 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 9 23:56:16.793518 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 23:56:16.793525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:56:16.793531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:56:16.793538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 9 23:56:16.793545 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:56:16.793551 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 23:56:16.793558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:56:16.793566 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 9 23:56:16.793573 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 23:56:16.793580 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:56:16.793586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:56:16.793593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:56:16.793615 systemd-journald[217]: Collecting audit messages is disabled. Jul 9 23:56:16.793634 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 23:56:16.793641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:56:16.793649 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 23:56:16.793657 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:56:16.793663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:56:16.793670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:56:16.793677 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 23:56:16.793683 kernel: Bridge firewalling registered Jul 9 23:56:16.793690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:56:16.793697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 9 23:56:16.793705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:56:16.793711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:56:16.793718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:56:16.793724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:56:16.793731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:56:16.793738 systemd-journald[217]: Journal started Jul 9 23:56:16.793753 systemd-journald[217]: Runtime Journal (/run/log/journal/1bfdcebbffb9471ea387da9f8521c0ae) is 4.8M, max 38.6M, 33.8M free. Jul 9 23:56:16.743574 systemd-modules-load[218]: Inserted module 'overlay' Jul 9 23:56:16.765969 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 9 23:56:16.798786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 23:56:16.798799 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:56:16.799315 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:56:16.804985 dracut-cmdline[241]: dracut-dracut-053 Jul 9 23:56:16.805847 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:56:16.807076 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c257b65f06e0ad68d969d5b3e057f031663dc29a4487d91a77595a40c4dc82d6 Jul 9 23:56:16.810080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 9 23:56:16.831566 systemd-resolved[264]: Positive Trust Anchors: Jul 9 23:56:16.831576 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:56:16.831599 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:56:16.833573 systemd-resolved[264]: Defaulting to hostname 'linux'. Jul 9 23:56:16.834482 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:56:16.834629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:56:16.854966 kernel: SCSI subsystem initialized Jul 9 23:56:16.862950 kernel: Loading iSCSI transport class v2.0-870. Jul 9 23:56:16.869958 kernel: iscsi: registered transport (tcp) Jul 9 23:56:16.884313 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:56:16.884335 kernel: QLogic iSCSI HBA Driver Jul 9 23:56:16.904458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:56:16.908159 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 23:56:16.923776 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 9 23:56:16.923807 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:56:16.923818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 9 23:56:16.954955 kernel: raid6: avx2x4 gen() 47337 MB/s Jul 9 23:56:16.971986 kernel: raid6: avx2x2 gen() 53375 MB/s Jul 9 23:56:16.989146 kernel: raid6: avx2x1 gen() 44481 MB/s Jul 9 23:56:16.989167 kernel: raid6: using algorithm avx2x2 gen() 53375 MB/s Jul 9 23:56:17.007130 kernel: raid6: .... xor() 32267 MB/s, rmw enabled Jul 9 23:56:17.007172 kernel: raid6: using avx2x2 recovery algorithm Jul 9 23:56:17.019951 kernel: xor: automatically using best checksumming function avx Jul 9 23:56:17.109962 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:56:17.114839 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:56:17.119031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:56:17.127015 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jul 9 23:56:17.130133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:56:17.135025 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:56:17.141756 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation Jul 9 23:56:17.156290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:56:17.161111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:56:17.234689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:56:17.240027 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:56:17.248040 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:56:17.248383 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 9 23:56:17.248608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:56:17.248918 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:56:17.252042 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:56:17.259158 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:56:17.306966 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 9 23:56:17.312967 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 9 23:56:17.318369 kernel: vmw_pvscsi: using 64bit dma Jul 9 23:56:17.318391 kernel: vmw_pvscsi: max_id: 16 Jul 9 23:56:17.318403 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 9 23:56:17.321949 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 9 23:56:17.326051 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 9 23:56:17.326071 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 9 23:56:17.326080 kernel: vmw_pvscsi: using MSI-X Jul 9 23:56:17.329951 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 9 23:56:17.330062 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 9 23:56:17.333322 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 9 23:56:17.335957 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 9 23:56:17.338971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:56:17.340718 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 9 23:56:17.340805 kernel: cryptd: max_cpu_qlen set to 1000 Jul 9 23:56:17.339053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:56:17.340847 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:56:17.341488 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 9 23:56:17.341565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:56:17.341754 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:56:17.345046 kernel: libata version 3.00 loaded. Jul 9 23:56:17.347120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:56:17.348951 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 9 23:56:17.353098 kernel: scsi host1: ata_piix Jul 9 23:56:17.358826 kernel: scsi host2: ata_piix Jul 9 23:56:17.358994 kernel: AVX2 version of gcm_enc/dec engaged. Jul 9 23:56:17.359011 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 9 23:56:17.359026 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 9 23:56:17.363454 kernel: AES CTR mode by8 optimization enabled Jul 9 23:56:17.366021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:56:17.371041 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:56:17.377752 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 9 23:56:17.527989 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 9 23:56:17.531975 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 9 23:56:17.545319 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 9 23:56:17.545435 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 9 23:56:17.545504 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 9 23:56:17.545568 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 9 23:56:17.545629 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 9 23:56:17.548953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 23:56:17.549966 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 9 23:56:17.551951 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 9 23:56:17.552036 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 9 23:56:17.563970 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 9 23:56:17.620953 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (486) Jul 9 23:56:17.620991 kernel: BTRFS: device fsid 2ea7ed46-2399-4750-93a6-9faa0c83416c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (491) Jul 9 23:56:17.632540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 9 23:56:17.638000 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 9 23:56:17.642533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 9 23:56:17.642797 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 9 23:56:17.648327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 9 23:56:17.656021 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 9 23:56:17.677953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 23:56:18.684956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 23:56:18.685719 disk-uuid[597]: The operation has completed successfully. Jul 9 23:56:18.730627 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:56:18.730922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:56:18.740060 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:56:18.741672 sh[613]: Success Jul 9 23:56:18.749957 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 9 23:56:18.795013 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:56:18.800768 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:56:18.801119 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 23:56:18.816540 kernel: BTRFS info (device dm-0): first mount of filesystem 2ea7ed46-2399-4750-93a6-9faa0c83416c Jul 9 23:56:18.816575 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:56:18.816584 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 9 23:56:18.817627 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 9 23:56:18.818419 kernel: BTRFS info (device dm-0): using free space tree Jul 9 23:56:18.825953 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 9 23:56:18.827726 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:56:18.838065 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 9 23:56:18.839268 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 9 23:56:18.857669 kernel: BTRFS info (device sda6): first mount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:56:18.857705 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:56:18.857713 kernel: BTRFS info (device sda6): using free space tree Jul 9 23:56:18.862953 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 9 23:56:18.866457 kernel: BTRFS info (device sda6): last unmount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:56:18.869005 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:56:18.872060 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:56:18.915905 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 9 23:56:18.922420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:56:18.978672 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:56:18.983835 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 9 23:56:18.984399 ignition[670]: Ignition 2.20.0 Jul 9 23:56:18.984405 ignition[670]: Stage: fetch-offline Jul 9 23:56:18.984425 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:56:18.984430 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 23:56:18.984480 ignition[670]: parsed url from cmdline: "" Jul 9 23:56:18.984482 ignition[670]: no config URL provided Jul 9 23:56:18.984485 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:56:18.984489 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:56:18.984860 ignition[670]: config successfully fetched Jul 9 23:56:18.984877 ignition[670]: parsing config with SHA512: 9561bf9eeeb90f5f8987fcea2a37125952c05081681fdeca0ef70f7682cf18833721dff51a2919aec50b8e478cd9a9d8ec3e028a5909ff83dbffaed455138b77 Jul 9 23:56:18.987888 ignition[670]: fetch-offline: fetch-offline passed Jul 9 23:56:18.987663 unknown[670]: fetched base config from "system" Jul 9 23:56:18.987932 ignition[670]: Ignition finished successfully Jul 9 23:56:18.987667 unknown[670]: fetched user config from "vmware" Jul 9 23:56:18.989970 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:56:19.002504 systemd-networkd[802]: lo: Link UP Jul 9 23:56:19.002511 systemd-networkd[802]: lo: Gained carrier Jul 9 23:56:19.003550 systemd-networkd[802]: Enumeration completed Jul 9 23:56:19.003705 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:56:19.003850 systemd[1]: Reached target network.target - Network. Jul 9 23:56:19.003879 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 9 23:56:19.003933 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jul 9 23:56:19.006949 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 9 23:56:19.007102 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 9 23:56:19.007376 systemd-networkd[802]: ens192: Link UP Jul 9 23:56:19.007383 systemd-networkd[802]: ens192: Gained carrier Jul 9 23:56:19.012097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 23:56:19.021936 ignition[807]: Ignition 2.20.0 Jul 9 23:56:19.021954 ignition[807]: Stage: kargs Jul 9 23:56:19.022063 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:56:19.022069 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 23:56:19.022625 ignition[807]: kargs: kargs passed Jul 9 23:56:19.022656 ignition[807]: Ignition finished successfully Jul 9 23:56:19.023785 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:56:19.028064 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 23:56:19.035568 ignition[815]: Ignition 2.20.0 Jul 9 23:56:19.035577 ignition[815]: Stage: disks Jul 9 23:56:19.035690 ignition[815]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:56:19.035696 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 23:56:19.036203 ignition[815]: disks: disks passed Jul 9 23:56:19.036233 ignition[815]: Ignition finished successfully Jul 9 23:56:19.036913 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:56:19.037274 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:56:19.037435 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:56:19.037679 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:56:19.037872 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:56:19.038059 systemd[1]: Reached target basic.target - Basic System. 
Jul 9 23:56:19.042051 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:56:19.052179 systemd-fsck[824]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 9 23:56:19.053600 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:56:19.057002 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 23:56:19.109980 kernel: EXT4-fs (sda9): mounted filesystem 147af866-f15a-4a2f-aea7-d9959c235d2a r/w with ordered data mode. Quota mode: none. Jul 9 23:56:19.110283 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:56:19.110652 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:56:19.115020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:56:19.117080 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:56:19.117488 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:56:19.117667 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:56:19.117684 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:56:19.121257 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:56:19.122105 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 9 23:56:19.124966 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (832) Jul 9 23:56:19.127185 kernel: BTRFS info (device sda6): first mount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:56:19.127213 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:56:19.127222 kernel: BTRFS info (device sda6): using free space tree Jul 9 23:56:19.131959 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 9 23:56:19.133345 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:56:19.152209 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:56:19.155200 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:56:19.157485 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:56:19.159430 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:56:19.212121 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:56:19.217056 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:56:19.219560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:56:19.222947 kernel: BTRFS info (device sda6): last unmount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:56:19.237115 ignition[945]: INFO : Ignition 2.20.0 Jul 9 23:56:19.237115 ignition[945]: INFO : Stage: mount Jul 9 23:56:19.239022 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:56:19.239022 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 23:56:19.239022 ignition[945]: INFO : mount: mount passed Jul 9 23:56:19.239022 ignition[945]: INFO : Ignition finished successfully Jul 9 23:56:19.238560 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:56:19.242047 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 9 23:56:19.242551 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 23:56:19.815155 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:56:19.820088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:56:19.828879 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (957) Jul 9 23:56:19.828906 kernel: BTRFS info (device sda6): first mount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:56:19.828916 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:56:19.829976 kernel: BTRFS info (device sda6): using free space tree Jul 9 23:56:19.833951 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 9 23:56:19.834219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:56:19.849340 ignition[974]: INFO : Ignition 2.20.0 Jul 9 23:56:19.849975 ignition[974]: INFO : Stage: files Jul 9 23:56:19.849975 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:56:19.849975 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 23:56:19.850871 ignition[974]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:56:19.851723 ignition[974]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:56:19.851868 ignition[974]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:56:19.853901 ignition[974]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:56:19.854060 ignition[974]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:56:19.854198 ignition[974]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:56:19.854134 unknown[974]: wrote ssh authorized keys file for user: core Jul 9 23:56:19.855845 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 9 23:56:19.856077 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 9 23:56:20.492480 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 23:56:20.986052 systemd-networkd[802]: ens192: Gained IPv6LL Jul 9 23:56:21.568786 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 9 23:56:21.568786 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:56:21.569580 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 9 23:56:22.047194 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 9 23:56:22.124582 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:56:22.124827 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:56:22.124827 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 23:56:22.124827 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:56:22.124827 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:56:22.124827 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 9 23:56:22.125640 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 9 23:56:22.768741 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 9 23:56:23.067557 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 9 23:56:23.067557 ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 9 23:56:23.068062 
ignition[974]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 9 23:56:23.068062 ignition[974]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jul 9 23:56:23.068367 ignition[974]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Jul 9 23:56:23.068538 ignition[974]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 23:56:23.127737 ignition[974]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:56:23.130817 ignition[974]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:56:23.130817 ignition[974]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 23:56:23.130817 ignition[974]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:56:23.130817 ignition[974]: INFO : files: op(13): [finished] setting preset to 
enabled for "prepare-helm.service" Jul 9 23:56:23.130817 ignition[974]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:56:23.130817 ignition[974]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:56:23.130817 ignition[974]: INFO : files: files passed Jul 9 23:56:23.130817 ignition[974]: INFO : Ignition finished successfully Jul 9 23:56:23.131544 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:56:23.135035 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 23:56:23.137014 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:56:23.143328 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:56:23.143811 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:56:23.143811 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:56:23.143383 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 23:56:23.144252 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:56:23.144905 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:56:23.145224 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:56:23.148027 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:56:23.159681 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:56:23.159739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:56:23.160032 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jul 9 23:56:23.160154 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:56:23.160346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:56:23.160776 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:56:23.178591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:56:23.182163 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 23:56:23.187440 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:56:23.187605 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:56:23.187823 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 23:56:23.188030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:56:23.188093 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:56:23.188430 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:56:23.188591 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:56:23.188766 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:56:23.189098 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:56:23.189296 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:56:23.189503 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:56:23.189724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:56:23.189928 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:56:23.190130 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:56:23.190312 systemd[1]: Stopped target swap.target - Swaps. 
Jul 9 23:56:23.190470 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:56:23.190547 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:56:23.190798 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:56:23.191070 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:56:23.191246 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 23:56:23.191285 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:56:23.191451 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 23:56:23.191516 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 23:56:23.191767 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:56:23.191828 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:56:23.192061 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:56:23.192190 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:56:23.196056 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:56:23.196254 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 23:56:23.196442 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:56:23.196683 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:56:23.196748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:56:23.196952 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:56:23.196995 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:56:23.197250 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:56:23.197330 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jul 9 23:56:23.197548 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:56:23.197627 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:56:23.202183 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:56:23.202277 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:56:23.202363 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:56:23.204747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:56:23.204850 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:56:23.204945 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:56:23.205119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:56:23.205178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:56:23.208569 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:56:23.208625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:56:23.211465 ignition[1029]: INFO : Ignition 2.20.0 Jul 9 23:56:23.211465 ignition[1029]: INFO : Stage: umount Jul 9 23:56:23.212308 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:56:23.212308 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 23:56:23.212743 ignition[1029]: INFO : umount: umount passed Jul 9 23:56:23.212926 ignition[1029]: INFO : Ignition finished successfully Jul 9 23:56:23.213421 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:56:23.213483 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:56:23.213686 systemd[1]: Stopped target network.target - Network. Jul 9 23:56:23.213761 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:56:23.213789 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jul 9 23:56:23.213882 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:56:23.213903 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:56:23.214004 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:56:23.214026 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 23:56:23.214716 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:56:23.214739 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:56:23.214957 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:56:23.215303 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:56:23.216680 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:56:23.220958 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:56:23.221030 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 23:56:23.222434 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:56:23.222711 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 23:56:23.222751 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:56:23.223463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:56:23.225631 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:56:23.225684 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:56:23.226606 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:56:23.226807 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:56:23.226831 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:56:23.231095 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 9 23:56:23.231188 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:56:23.231215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:56:23.231340 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 9 23:56:23.231363 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 9 23:56:23.231480 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:56:23.231519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:56:23.232777 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 23:56:23.232804 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 23:56:23.232926 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:56:23.233520 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:56:23.237866 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 23:56:23.237917 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 23:56:23.243313 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 23:56:23.243385 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:56:23.243667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 23:56:23.243697 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 23:56:23.244051 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 23:56:23.244068 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:56:23.244220 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 23:56:23.244243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:56:23.244496 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 9 23:56:23.244546 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 23:56:23.244831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:56:23.244854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:56:23.248082 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 23:56:23.248206 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 23:56:23.248232 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:56:23.249142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:56:23.249171 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:56:23.251043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 23:56:23.251107 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 23:56:23.321467 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:56:23.321551 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:56:23.322047 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 23:56:23.322200 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:56:23.322249 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:56:23.325054 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 23:56:23.335936 systemd[1]: Switching root. Jul 9 23:56:23.365753 systemd-journald[217]: Journal stopped Jul 9 23:56:24.599696 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Jul 9 23:56:24.599715 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 23:56:24.599723 kernel: SELinux: policy capability open_perms=1 Jul 9 23:56:24.599729 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 23:56:24.599734 kernel: SELinux: policy capability always_check_network=0 Jul 9 23:56:24.599739 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 23:56:24.599746 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 23:56:24.599752 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 23:56:24.599758 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 23:56:24.599764 systemd[1]: Successfully loaded SELinux policy in 31.005ms. Jul 9 23:56:24.599771 kernel: audit: type=1403 audit(1752105384.042:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 23:56:24.599777 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.904ms. Jul 9 23:56:24.599784 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:56:24.599791 systemd[1]: Detected virtualization vmware. Jul 9 23:56:24.599798 systemd[1]: Detected architecture x86-64. Jul 9 23:56:24.599804 systemd[1]: Detected first boot. Jul 9 23:56:24.599811 systemd[1]: Initializing machine ID from random generator. Jul 9 23:56:24.599818 zram_generator::config[1073]: No configuration found. 
Jul 9 23:56:24.599897 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 9 23:56:24.599908 kernel: Guest personality initialized and is active Jul 9 23:56:24.599914 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 9 23:56:24.599920 kernel: Initialized host personality Jul 9 23:56:24.599926 kernel: NET: Registered PF_VSOCK protocol family Jul 9 23:56:24.599933 systemd[1]: Populated /etc with preset unit settings. Jul 9 23:56:24.599992 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 9 23:56:24.600002 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 9 23:56:24.600009 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 23:56:24.600015 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 23:56:24.600022 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 23:56:24.600028 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 23:56:24.600037 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 23:56:24.600044 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 23:56:24.600051 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 23:56:24.600057 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 23:56:24.600063 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 23:56:24.600070 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 23:56:24.600077 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jul 9 23:56:24.600083 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 23:56:24.600091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:56:24.600098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:56:24.600194 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 23:56:24.600205 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 23:56:24.600212 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 23:56:24.600219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:56:24.600226 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 9 23:56:24.600233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:56:24.600241 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 23:56:24.600248 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 23:56:24.600255 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 23:56:24.600262 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 23:56:24.600268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:56:24.600275 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:56:24.600282 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:56:24.600288 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:56:24.600296 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 23:56:24.600303 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jul 9 23:56:24.601971 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 23:56:24.601982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:56:24.601990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:56:24.601999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:56:24.602006 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 23:56:24.602013 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 23:56:24.602020 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 23:56:24.602027 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 23:56:24.602034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 23:56:24.602041 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 23:56:24.602048 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 23:56:24.602056 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 23:56:24.602064 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 23:56:24.602071 systemd[1]: Reached target machines.target - Containers. Jul 9 23:56:24.602078 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 23:56:24.602085 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 9 23:56:24.602092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:56:24.602099 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 23:56:24.602106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 9 23:56:24.602114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:56:24.602121 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:56:24.602128 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 23:56:24.602135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:56:24.602142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 23:56:24.602149 kernel: fuse: init (API version 7.39) Jul 9 23:56:24.602156 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 23:56:24.602163 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 23:56:24.602170 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 23:56:24.602178 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 23:56:24.602185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:56:24.602192 kernel: loop: module loaded Jul 9 23:56:24.602198 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:56:24.602205 kernel: ACPI: bus type drm_connector registered Jul 9 23:56:24.602211 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:56:24.602218 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:56:24.602225 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 23:56:24.602233 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 23:56:24.602240 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 9 23:56:24.602247 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 23:56:24.602254 systemd[1]: Stopped verity-setup.service. Jul 9 23:56:24.602261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 23:56:24.602281 systemd-journald[1170]: Collecting audit messages is disabled. Jul 9 23:56:24.602300 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 23:56:24.602307 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 23:56:24.602314 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 23:56:24.602321 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 23:56:24.602328 systemd-journald[1170]: Journal started Jul 9 23:56:24.602344 systemd-journald[1170]: Runtime Journal (/run/log/journal/1db33e7a8e3f44929ee46675421b91ef) is 4.8M, max 38.6M, 33.8M free. Jul 9 23:56:24.424439 systemd[1]: Queued start job for default target multi-user.target. Jul 9 23:56:24.434336 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 9 23:56:24.434610 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 23:56:24.602854 jq[1143]: true Jul 9 23:56:24.603320 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:56:24.603790 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 23:56:24.605054 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 23:56:24.605345 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 23:56:24.610479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:56:24.610805 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 23:56:24.610915 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jul 9 23:56:24.611236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:56:24.611333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:56:24.611607 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:56:24.611704 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:56:24.611996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:56:24.612093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:56:24.612374 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 23:56:24.612469 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 23:56:24.612746 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:56:24.612843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:56:24.613149 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:56:24.613425 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:56:24.613704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 23:56:24.614012 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 23:56:24.615305 jq[1189]: true Jul 9 23:56:24.625686 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:56:24.632008 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 23:56:24.634915 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 23:56:24.635076 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 23:56:24.635133 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 9 23:56:24.635928 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 23:56:24.642254 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 23:56:24.645124 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 23:56:24.645275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:56:24.651207 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 23:56:24.654025 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 23:56:24.654150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:56:24.655068 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 23:56:24.655187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:56:24.659020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:56:24.665170 systemd-journald[1170]: Time spent on flushing to /var/log/journal/1db33e7a8e3f44929ee46675421b91ef is 40.012ms for 1844 entries. Jul 9 23:56:24.665170 systemd-journald[1170]: System Journal (/var/log/journal/1db33e7a8e3f44929ee46675421b91ef) is 8M, max 584.8M, 576.8M free. Jul 9 23:56:24.738104 systemd-journald[1170]: Received client request to flush runtime journal. Jul 9 23:56:24.738129 kernel: loop0: detected capacity change from 0 to 138176 Jul 9 23:56:24.660064 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 23:56:24.661126 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 23:56:24.662188 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jul 9 23:56:24.662334 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 23:56:24.662568 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 23:56:24.709175 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 23:56:24.709603 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 23:56:24.718052 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 23:56:24.724825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:56:24.729309 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 9 23:56:24.744205 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 23:56:24.745259 udevadm[1230]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 9 23:56:24.751116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:56:24.795389 ignition[1193]: Ignition 2.20.0
Jul 9 23:56:24.795575 ignition[1193]: deleting config from guestinfo properties
Jul 9 23:56:24.860807 ignition[1193]: Successfully deleted config
Jul 9 23:56:24.861749 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jul 9 23:56:24.982678 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 23:56:25.001337 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 23:56:25.013006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:56:25.017962 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 23:56:25.052026 kernel: loop1: detected capacity change from 0 to 2960
Jul 9 23:56:25.053567 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jul 9 23:56:25.053579 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jul 9 23:56:25.058240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:56:25.117345 kernel: loop2: detected capacity change from 0 to 147912
Jul 9 23:56:25.156962 kernel: loop3: detected capacity change from 0 to 224512
Jul 9 23:56:25.212048 kernel: loop4: detected capacity change from 0 to 138176
Jul 9 23:56:25.234118 kernel: loop5: detected capacity change from 0 to 2960
Jul 9 23:56:25.251990 kernel: loop6: detected capacity change from 0 to 147912
Jul 9 23:56:25.273958 kernel: loop7: detected capacity change from 0 to 224512
Jul 9 23:56:25.289479 (sd-merge)[1250]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jul 9 23:56:25.290078 (sd-merge)[1250]: Merged extensions into '/usr'.
Jul 9 23:56:25.294449 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 23:56:25.294458 systemd[1]: Reloading...
Jul 9 23:56:25.356988 zram_generator::config[1274]: No configuration found.
Jul 9 23:56:25.441491 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 9 23:56:25.461170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:56:25.503477 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 23:56:25.503735 systemd[1]: Reloading finished in 208 ms.
Jul 9 23:56:25.519900 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 23:56:25.520337 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 23:56:25.524854 systemd[1]: Starting ensure-sysext.service...
Jul 9 23:56:25.527801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:56:25.530848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:56:25.543813 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 23:56:25.544262 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 23:56:25.544758 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 23:56:25.544914 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 9 23:56:25.545831 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 9 23:56:25.547726 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:56:25.547729 systemd-tmpfiles[1335]: Skipping /boot
Jul 9 23:56:25.551966 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Jul 9 23:56:25.551976 systemd[1]: Reloading...
Jul 9 23:56:25.553534 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:56:25.553577 systemd-tmpfiles[1335]: Skipping /boot
Jul 9 23:56:25.584905 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Jul 9 23:56:25.599956 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:56:25.606990 zram_generator::config[1362]: No configuration found.
Jul 9 23:56:25.706388 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 9 23:56:25.711062 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 9 23:56:25.719951 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1385)
Jul 9 23:56:25.721005 kernel: ACPI: button: Power Button [PWRF]
Jul 9 23:56:25.733896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:56:25.795497 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 9 23:56:25.795665 systemd[1]: Reloading finished in 243 ms.
Jul 9 23:56:25.801675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:56:25.802018 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:56:25.809794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:56:25.811955 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jul 9 23:56:25.822388 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:56:25.836704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jul 9 23:56:25.837433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:56:25.838992 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jul 9 23:56:25.839195 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:56:25.841882 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:56:25.845019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:56:25.847578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:56:25.850165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:56:25.859063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:56:25.859261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:56:25.861177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 23:56:25.861285 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:56:25.870083 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:56:25.876022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:56:25.883015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:56:25.886362 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 23:56:25.890021 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:56:25.890150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:56:25.890723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:56:25.891979 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:56:25.892265 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:56:25.892378 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:56:25.892602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:56:25.892705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:56:25.892956 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:56:25.893062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:56:25.893650 (udev-worker)[1377]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jul 9 23:56:25.899483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:56:25.899557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:56:25.906980 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 23:56:25.907288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:56:25.911069 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 23:56:25.911437 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:56:25.917570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:56:25.924140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 23:56:25.931334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:56:25.951498 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 23:56:26.036462 augenrules[1503]: No rules
Jul 9 23:56:26.038686 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:56:26.038829 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:56:26.072880 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:56:26.073727 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 23:56:26.074161 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:56:26.081032 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:56:26.081405 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 9 23:56:26.086187 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 9 23:56:26.092496 systemd-resolved[1467]: Positive Trust Anchors:
Jul 9 23:56:26.092504 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:56:26.092526 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:56:26.097490 systemd-networkd[1466]: lo: Link UP
Jul 9 23:56:26.098829 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:56:26.099099 systemd-networkd[1466]: lo: Gained carrier
Jul 9 23:56:26.099984 lvm[1513]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 9 23:56:26.101621 systemd-networkd[1466]: Enumeration completed
Jul 9 23:56:26.101663 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:56:26.102788 systemd-networkd[1466]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Jul 9 23:56:26.105258 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jul 9 23:56:26.105381 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jul 9 23:56:26.105132 systemd-resolved[1467]: Defaulting to hostname 'linux'.
Jul 9 23:56:26.106573 systemd-networkd[1466]: ens192: Link UP
Jul 9 23:56:26.106805 systemd-networkd[1466]: ens192: Gained carrier
Jul 9 23:56:26.109084 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 23:56:26.111232 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection.
Jul 9 23:56:26.116097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 23:56:26.116274 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:56:26.116548 systemd[1]: Reached target network.target - Network.
Jul 9 23:56:26.116649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:56:26.120142 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 9 23:56:26.120371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:56:26.121636 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 9 23:56:26.126129 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 23:56:26.127215 lvm[1519]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 9 23:56:26.147255 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 9 23:56:26.147668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:56:26.148160 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:56:26.148383 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 23:56:26.148593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 23:56:26.148796 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 23:56:26.148962 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 23:56:26.149079 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 23:56:26.149191 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 23:56:26.149207 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:56:26.149297 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:56:26.150469 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 23:56:26.151768 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 23:56:26.153633 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 23:56:26.153909 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 23:56:26.154096 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 23:56:26.155918 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 23:56:26.156573 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 23:56:26.157222 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 23:56:26.157416 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:56:26.157558 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:56:26.157729 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:56:26.157781 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:56:26.158569 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 23:56:26.161133 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 23:56:26.162964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 23:56:26.164027 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 23:56:26.166108 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 23:56:26.166622 jq[1529]: false
Jul 9 23:56:26.167103 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 23:56:26.168695 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 23:56:26.170027 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 23:56:26.172067 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 23:56:26.177900 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 23:56:26.178517 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 23:56:26.179534 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 23:56:26.180125 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 23:56:26.182999 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 23:56:26.184818 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jul 9 23:56:26.186410 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 23:56:26.186545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 23:56:26.196891 jq[1540]: true
Jul 9 23:56:26.208210 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 23:56:26.208348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 23:56:26.208649 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 23:56:26.208770 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 23:56:26.214911 update_engine[1539]: I20250709 23:56:26.214871 1539 main.cc:92] Flatcar Update Engine starting
Jul 9 23:56:26.216836 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 23:56:26.217119 extend-filesystems[1530]: Found loop4
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found loop5
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found loop6
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found loop7
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda1
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda2
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda3
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found usr
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda4
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda6
Jul 9 23:56:26.217310 extend-filesystems[1530]: Found sda7
Jul 9 23:56:26.218863 extend-filesystems[1530]: Found sda9
Jul 9 23:56:26.218863 extend-filesystems[1530]: Checking size of /dev/sda9
Jul 9 23:56:26.227166 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jul 9 23:56:26.232012 jq[1555]: true
Jul 9 23:56:26.231002 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jul 9 23:56:26.242064 extend-filesystems[1530]: Old size kept for /dev/sda9
Jul 9 23:56:26.242064 extend-filesystems[1530]: Found sr0
Jul 9 23:56:26.243162 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 23:56:26.243971 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 23:56:26.258467 systemd-logind[1536]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 9 23:56:26.258574 systemd-logind[1536]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 23:56:26.259060 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jul 9 23:56:26.259561 systemd-logind[1536]: New seat seat0.
Jul 9 23:56:26.261009 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 23:56:26.262475 tar[1549]: linux-amd64/LICENSE
Jul 9 23:56:26.262621 tar[1549]: linux-amd64/helm
Jul 9 23:56:26.265777 unknown[1563]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jul 9 23:56:26.268399 unknown[1563]: Core dump limit set to -1
Jul 9 23:56:26.277992 dbus-daemon[1528]: [system] SELinux support is enabled
Jul 9 23:56:26.278089 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 23:56:26.279746 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 23:56:26.279766 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 23:56:26.280175 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 23:56:26.280187 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 23:56:26.284369 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 9 23:56:26.289056 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 23:56:26.289255 update_engine[1539]: I20250709 23:56:26.289108 1539 update_check_scheduler.cc:74] Next update check in 5m7s
Jul 9 23:56:26.295772 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 23:56:26.303970 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1370)
Jul 9 23:56:26.384269 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 23:56:26.788499 containerd[1556]: time="2025-07-09T23:56:26.788102751Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 9 23:56:26.793342 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 23:56:26.804640 bash[1588]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 23:56:26.804675 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 23:56:26.805706 containerd[1556]: time="2025-07-09T23:56:26.805686434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.806357 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 23:56:26.807225 containerd[1556]: time="2025-07-09T23:56:26.807204612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807270 containerd[1556]: time="2025-07-09T23:56:26.807262639Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 9 23:56:26.807305 containerd[1556]: time="2025-07-09T23:56:26.807298455Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 9 23:56:26.807449 containerd[1556]: time="2025-07-09T23:56:26.807439660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807489673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807541665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807549884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807667606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807681116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807693324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807700516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.807792 containerd[1556]: time="2025-07-09T23:56:26.807751193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.808073 containerd[1556]: time="2025-07-09T23:56:26.808062353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 9 23:56:26.808191 containerd[1556]: time="2025-07-09T23:56:26.808180473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 23:56:26.808225 containerd[1556]: time="2025-07-09T23:56:26.808218441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 9 23:56:26.808302 containerd[1556]: time="2025-07-09T23:56:26.808293759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 9 23:56:26.808366 containerd[1556]: time="2025-07-09T23:56:26.808353215Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 23:56:26.814121 containerd[1556]: time="2025-07-09T23:56:26.814093273Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 9 23:56:26.814233 containerd[1556]: time="2025-07-09T23:56:26.814223468Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 9 23:56:26.814454 containerd[1556]: time="2025-07-09T23:56:26.814295652Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 9 23:56:26.814454 containerd[1556]: time="2025-07-09T23:56:26.814310216Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 9 23:56:26.814454 containerd[1556]: time="2025-07-09T23:56:26.814320463Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 9 23:56:26.814454 containerd[1556]: time="2025-07-09T23:56:26.814420649Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 9 23:56:26.814695 containerd[1556]: time="2025-07-09T23:56:26.814680666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 9 23:56:26.814798 containerd[1556]: time="2025-07-09T23:56:26.814789254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814832531Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814849294Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814858948Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814866275Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814873928Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814882253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814890203Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814897363Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814904379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814910825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814922193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814929819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.814960 containerd[1556]: time="2025-07-09T23:56:26.814936916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815153348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815170279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815181594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815188515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815200752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815209016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815218106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815225126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815231954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815238589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815248062Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815260531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815267435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.815299 containerd[1556]: time="2025-07-09T23:56:26.815273114Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815522136Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815539166Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815546670Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815555960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815561925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815613380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815621956Z" level=info msg="NRI interface is disabled by configuration."
Jul 9 23:56:26.816099 containerd[1556]: time="2025-07-09T23:56:26.815628424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jul 9 23:56:26.816233 containerd[1556]: time="2025-07-09T23:56:26.815790076Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 9 23:56:26.816233 containerd[1556]: time="2025-07-09T23:56:26.815817337Z" level=info msg="Connect containerd service" Jul 9 23:56:26.816233 containerd[1556]: time="2025-07-09T23:56:26.815831847Z" level=info msg="using legacy CRI server" Jul 9 23:56:26.816233 containerd[1556]: time="2025-07-09T23:56:26.815836280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:56:26.816233 containerd[1556]: time="2025-07-09T23:56:26.815897092Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 9 23:56:26.816572 containerd[1556]: time="2025-07-09T23:56:26.816560572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:56:26.816706 containerd[1556]: time="2025-07-09T23:56:26.816688189Z" level=info msg="Start subscribing containerd event" Jul 9 23:56:26.816833 containerd[1556]: time="2025-07-09T23:56:26.816825369Z" level=info msg="Start recovering state" Jul 9 23:56:26.816900 containerd[1556]: time="2025-07-09T23:56:26.816888910Z" level=info msg="Start event monitor" Jul 9 23:56:26.817714 containerd[1556]: time="2025-07-09T23:56:26.816930787Z" level=info msg="Start snapshots 
syncer" Jul 9 23:56:26.817714 containerd[1556]: time="2025-07-09T23:56:26.816807849Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:56:26.817714 containerd[1556]: time="2025-07-09T23:56:26.816988683Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:56:26.817984 containerd[1556]: time="2025-07-09T23:56:26.817927543Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:56:26.818031 containerd[1556]: time="2025-07-09T23:56:26.818017597Z" level=info msg="Start streaming server" Jul 9 23:56:26.818175 containerd[1556]: time="2025-07-09T23:56:26.818167328Z" level=info msg="containerd successfully booted in 0.182253s" Jul 9 23:56:26.818219 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:56:26.837194 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:56:26.844160 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:56:26.848780 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:56:26.849222 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:56:26.851992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:56:26.873849 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:56:26.879366 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:56:26.880962 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 9 23:56:26.881321 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 23:56:27.101566 tar[1549]: linux-amd64/README.md Jul 9 23:56:27.114355 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 23:56:27.322084 systemd-networkd[1466]: ens192: Gained IPv6LL Jul 9 23:56:27.323024 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. 
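The long "Start cri plugin with config {...}" dump earlier in the log is containerd echoing its effective CRI configuration. Rendered as a `/etc/containerd/config.toml`, the non-default values visible in that dump would look roughly like the sketch below; this is reconstructed only from fields shown in the log (snapshotter, runc runtime with SystemdCgroup, CNI paths, sandbox image, SELinux, disabled TCP service), not from Flatcar's actual shipped file:

```toml
# Sketch of a config.toml matching the CRI config dump in the log above.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  enable_selinux = true
  disable_tcp_service = true
  stream_server_address = "127.0.0.1"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
    max_conf_num = 1
```

The `failed to load cni during init ... no network config found in /etc/cni/net.d` error above is expected at this stage: the `conf_dir` exists but contains no conflist yet, so the CRI plugin starts without pod networking until a CNI provider drops one in.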
Jul 9 23:56:27.323595 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:56:27.324362 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 23:56:27.330120 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 9 23:56:27.333249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:56:27.335294 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:56:27.358315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 23:56:27.370603 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 23:56:27.370753 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 9 23:56:27.371467 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:56:28.304827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:56:28.305421 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:56:28.305727 systemd[1]: Startup finished in 978ms (kernel) + 7.417s (initrd) + 4.292s (userspace) = 12.687s. Jul 9 23:56:28.312130 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:56:28.336214 login[1665]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 9 23:56:28.336760 login[1671]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 9 23:56:28.345414 systemd-logind[1536]: New session 2 of user core. Jul 9 23:56:28.345845 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:56:28.351090 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:56:28.353978 systemd-logind[1536]: New session 1 of user core. 
Jul 9 23:56:28.359377 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:56:28.365091 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:56:28.366920 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:56:28.368364 systemd-logind[1536]: New session c1 of user core. Jul 9 23:56:28.452927 systemd[1715]: Queued start job for default target default.target. Jul 9 23:56:28.465766 systemd[1715]: Created slice app.slice - User Application Slice. Jul 9 23:56:28.465833 systemd[1715]: Reached target paths.target - Paths. Jul 9 23:56:28.465867 systemd[1715]: Reached target timers.target - Timers. Jul 9 23:56:28.466641 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:56:28.475043 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:56:28.475074 systemd[1715]: Reached target sockets.target - Sockets. Jul 9 23:56:28.475101 systemd[1715]: Reached target basic.target - Basic System. Jul 9 23:56:28.475123 systemd[1715]: Reached target default.target - Main User Target. Jul 9 23:56:28.475139 systemd[1715]: Startup finished in 103ms. Jul 9 23:56:28.475145 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:56:28.480026 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:56:28.480896 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 9 23:56:28.836223 kubelet[1708]: E0709 23:56:28.836192 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:56:28.837684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:56:28.837822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:56:28.838145 systemd[1]: kubelet.service: Consumed 646ms CPU time, 268.2M memory peak. Jul 9 23:56:29.010378 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jul 9 23:56:39.088380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:56:39.099127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:56:39.172188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:56:39.174813 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:56:39.231731 kubelet[1758]: E0709 23:56:39.231537 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:56:39.233872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:56:39.233972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:56:39.234227 systemd[1]: kubelet.service: Consumed 91ms CPU time, 112.2M memory peak. 
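The kubelet crash loop above (exit status 1, systemd restart counter incrementing roughly every ten seconds) is fully explained by the `run.go:72` error: `/var/lib/kubelet/config.yaml` does not exist until a provisioning step such as `kubeadm init`/`kubeadm join` writes it. A minimal sketch of that failing precondition, reproduced in a scratch directory rather than the real path:

```shell
# Sketch of the kubelet's failing precondition, using a scratch directory
# (the real path from the log above is /var/lib/kubelet/config.yaml).
kubelet_dir=$(mktemp -d)
config="$kubelet_dir/config.yaml"

# Before provisioning: the file is absent, so the kubelet exits with
# status 1 and systemd schedules the next restart attempt.
if [ ! -f "$config" ]; then
    echo "open $config: no such file or directory"
fi

# Once something writes a KubeletConfiguration, the same check passes
# and the unit stops crash-looping.
printf 'apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n' > "$config"
[ -f "$config" ] && echo "config present"

rm -rf "$kubelet_dir"
```

This matches the later restart cycles in the log (counters 1, 2, 3), which fail identically until the node is actually joined to a cluster.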
Jul 9 23:56:49.241277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:56:49.256075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:56:49.582651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:56:49.585803 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:56:49.619835 kubelet[1773]: E0709 23:56:49.619775 1773 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:56:49.621198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:56:49.621292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:56:49.621499 systemd[1]: kubelet.service: Consumed 98ms CPU time, 107.5M memory peak. Jul 9 23:56:56.421149 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:56:56.421979 systemd[1]: Started sshd@0-139.178.70.109:22-139.178.89.65:47306.service - OpenSSH per-connection server daemon (139.178.89.65:47306). Jul 9 23:56:56.465604 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 47306 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:56.466613 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:56.469353 systemd-logind[1536]: New session 3 of user core. Jul 9 23:56:56.477098 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:56:56.539120 systemd[1]: Started sshd@1-139.178.70.109:22-139.178.89.65:47314.service - OpenSSH per-connection server daemon (139.178.89.65:47314). 
Jul 9 23:56:56.565844 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 47314 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:56.566819 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:56.570015 systemd-logind[1536]: New session 4 of user core. Jul 9 23:56:56.584096 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:56:56.631676 sshd[1788]: Connection closed by 139.178.89.65 port 47314 Jul 9 23:56:56.632038 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jul 9 23:56:56.641277 systemd[1]: sshd@1-139.178.70.109:22-139.178.89.65:47314.service: Deactivated successfully. Jul 9 23:56:56.642373 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:56:56.642880 systemd-logind[1536]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:56:56.648145 systemd[1]: Started sshd@2-139.178.70.109:22-139.178.89.65:47328.service - OpenSSH per-connection server daemon (139.178.89.65:47328). Jul 9 23:56:56.650152 systemd-logind[1536]: Removed session 4. Jul 9 23:56:56.677102 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 47328 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:56.678097 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:56.680797 systemd-logind[1536]: New session 5 of user core. Jul 9 23:56:56.691110 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 23:56:56.737466 sshd[1796]: Connection closed by 139.178.89.65 port 47328 Jul 9 23:56:56.737846 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jul 9 23:56:56.755739 systemd[1]: sshd@2-139.178.70.109:22-139.178.89.65:47328.service: Deactivated successfully. Jul 9 23:56:56.756654 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:56:56.757157 systemd-logind[1536]: Session 5 logged out. 
Waiting for processes to exit. Jul 9 23:56:56.765147 systemd[1]: Started sshd@3-139.178.70.109:22-139.178.89.65:47332.service - OpenSSH per-connection server daemon (139.178.89.65:47332). Jul 9 23:56:56.766297 systemd-logind[1536]: Removed session 5. Jul 9 23:56:56.793511 sshd[1801]: Accepted publickey for core from 139.178.89.65 port 47332 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:56.794302 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:56.796952 systemd-logind[1536]: New session 6 of user core. Jul 9 23:56:56.804044 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:56:56.854190 sshd[1804]: Connection closed by 139.178.89.65 port 47332 Jul 9 23:56:56.854105 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jul 9 23:56:56.863191 systemd[1]: sshd@3-139.178.70.109:22-139.178.89.65:47332.service: Deactivated successfully. Jul 9 23:56:56.864286 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:56:56.864788 systemd-logind[1536]: Session 6 logged out. Waiting for processes to exit. Jul 9 23:56:56.869266 systemd[1]: Started sshd@4-139.178.70.109:22-139.178.89.65:47344.service - OpenSSH per-connection server daemon (139.178.89.65:47344). Jul 9 23:56:56.871176 systemd-logind[1536]: Removed session 6. Jul 9 23:56:56.898587 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 47344 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:56.899419 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:56.902169 systemd-logind[1536]: New session 7 of user core. Jul 9 23:56:56.908141 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 9 23:56:57.017226 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:56:57.017396 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:56:57.033377 sudo[1813]: pam_unix(sudo:session): session closed for user root Jul 9 23:56:57.034270 sshd[1812]: Connection closed by 139.178.89.65 port 47344 Jul 9 23:56:57.035236 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Jul 9 23:56:57.044172 systemd[1]: sshd@4-139.178.70.109:22-139.178.89.65:47344.service: Deactivated successfully. Jul 9 23:56:57.045190 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:56:57.045659 systemd-logind[1536]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:56:57.049220 systemd[1]: Started sshd@5-139.178.70.109:22-139.178.89.65:47348.service - OpenSSH per-connection server daemon (139.178.89.65:47348). Jul 9 23:56:57.050155 systemd-logind[1536]: Removed session 7. Jul 9 23:56:57.078217 sshd[1818]: Accepted publickey for core from 139.178.89.65 port 47348 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:57.079070 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:57.081813 systemd-logind[1536]: New session 8 of user core. Jul 9 23:56:57.094083 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 9 23:56:57.141725 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:56:57.141915 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:56:57.143966 sudo[1823]: pam_unix(sudo:session): session closed for user root Jul 9 23:56:57.147223 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:56:57.147391 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:56:57.160197 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:56:57.176129 augenrules[1845]: No rules Jul 9 23:56:57.176758 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:56:57.177007 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:56:57.177965 sudo[1822]: pam_unix(sudo:session): session closed for user root Jul 9 23:56:57.179421 sshd[1821]: Connection closed by 139.178.89.65 port 47348 Jul 9 23:56:57.179608 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Jul 9 23:56:57.185121 systemd[1]: sshd@5-139.178.70.109:22-139.178.89.65:47348.service: Deactivated successfully. Jul 9 23:56:57.186090 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:56:57.186511 systemd-logind[1536]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:56:57.190194 systemd[1]: Started sshd@6-139.178.70.109:22-139.178.89.65:47358.service - OpenSSH per-connection server daemon (139.178.89.65:47358). Jul 9 23:56:57.191337 systemd-logind[1536]: Removed session 8. 
Jul 9 23:56:57.219247 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 47358 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 9 23:56:57.220171 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:56:57.224523 systemd-logind[1536]: New session 9 of user core. Jul 9 23:56:57.226072 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 23:56:57.274646 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:56:57.275585 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:56:57.639152 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 23:56:57.639241 (dockerd)[1873]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 23:56:58.118381 dockerd[1873]: time="2025-07-09T23:56:58.118345063Z" level=info msg="Starting up" Jul 9 23:56:58.170799 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1051694975-merged.mount: Deactivated successfully. Jul 9 23:56:58.191355 dockerd[1873]: time="2025-07-09T23:56:58.191326653Z" level=info msg="Loading containers: start." Jul 9 23:56:58.291955 kernel: Initializing XFRM netlink socket Jul 9 23:56:58.307408 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jul 9 23:56:58.338373 systemd-networkd[1466]: docker0: Link UP Jul 9 23:56:58.364922 dockerd[1873]: time="2025-07-09T23:56:58.364887482Z" level=info msg="Loading containers: done." Jul 9 23:56:58.374882 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2264594789-merged.mount: Deactivated successfully. 
Jul 9 23:56:58.384972 dockerd[1873]: time="2025-07-09T23:56:58.384927971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 23:56:58.385038 dockerd[1873]: time="2025-07-09T23:56:58.385022566Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 9 23:56:58.385108 dockerd[1873]: time="2025-07-09T23:56:58.385090950Z" level=info msg="Daemon has completed initialization" Jul 9 23:56:58.404974 dockerd[1873]: time="2025-07-09T23:56:58.404904007Z" level=info msg="API listen on /run/docker.sock" Jul 9 23:56:58.405237 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 23:58:24.954840 systemd-resolved[1467]: Clock change detected. Flushing caches. Jul 9 23:58:24.955053 systemd-timesyncd[1469]: Contacted time server 216.82.35.115:123 (2.flatcar.pool.ntp.org). Jul 9 23:58:24.955092 systemd-timesyncd[1469]: Initial clock synchronization to Wed 2025-07-09 23:58:24.954098 UTC. Jul 9 23:58:25.887766 containerd[1556]: time="2025-07-09T23:58:25.887735681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 9 23:58:26.282239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 23:58:26.291002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:58:26.481935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
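The recurring `Referenced but unset environment variable ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS` notices come from the kubelet unit's `ExecStart` expanding environment variables that no `EnvironmentFile` has populated yet; systemd substitutes empty strings and logs the notice. A drop-in of the general shape below is what would define them (the file names and the `--node-ip` value are illustrative, not taken from this log):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative sketch)
[Service]
# '-' prefix: don't fail if the file doesn't exist yet (pre-join state).
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.5"
```

Until such a file exists the variables evaluate to empty strings, which is harmless here; the actual failure is the missing config.yaml, not the unset variables.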
Jul 9 23:58:26.482918 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:58:26.508070 kubelet[2068]: E0709 23:58:26.508045 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:58:26.509741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:58:26.509824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:58:26.510012 systemd[1]: kubelet.service: Consumed 89ms CPU time, 108.1M memory peak. Jul 9 23:58:26.722279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053662470.mount: Deactivated successfully. Jul 9 23:58:27.825802 containerd[1556]: time="2025-07-09T23:58:27.825767658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:27.835984 containerd[1556]: time="2025-07-09T23:58:27.835946342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 9 23:58:27.840792 containerd[1556]: time="2025-07-09T23:58:27.840767330Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:27.849273 containerd[1556]: time="2025-07-09T23:58:27.849241377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:27.849980 containerd[1556]: time="2025-07-09T23:58:27.849713822Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.961951783s" Jul 9 23:58:27.849980 containerd[1556]: time="2025-07-09T23:58:27.849732474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 9 23:58:27.850210 containerd[1556]: time="2025-07-09T23:58:27.850190117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 9 23:58:29.289876 containerd[1556]: time="2025-07-09T23:58:29.289420876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:29.294722 containerd[1556]: time="2025-07-09T23:58:29.294680278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 9 23:58:29.300101 containerd[1556]: time="2025-07-09T23:58:29.300072573Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:29.308290 containerd[1556]: time="2025-07-09T23:58:29.308261910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:29.308900 containerd[1556]: time="2025-07-09T23:58:29.308734855Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo 
tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.458485898s" Jul 9 23:58:29.308900 containerd[1556]: time="2025-07-09T23:58:29.308755859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 9 23:58:29.309082 containerd[1556]: time="2025-07-09T23:58:29.309065461Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 9 23:58:30.465177 containerd[1556]: time="2025-07-09T23:58:30.465138634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:30.465763 containerd[1556]: time="2025-07-09T23:58:30.465716635Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 9 23:58:30.466402 containerd[1556]: time="2025-07-09T23:58:30.465947638Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:30.467520 containerd[1556]: time="2025-07-09T23:58:30.467493071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:30.468185 containerd[1556]: time="2025-07-09T23:58:30.468089183Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size 
\"20778768\" in 1.159005812s" Jul 9 23:58:30.468185 containerd[1556]: time="2025-07-09T23:58:30.468106036Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 9 23:58:30.468706 containerd[1556]: time="2025-07-09T23:58:30.468694331Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 23:58:31.756478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764186651.mount: Deactivated successfully. Jul 9 23:58:32.554869 containerd[1556]: time="2025-07-09T23:58:32.554802994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:32.558700 containerd[1556]: time="2025-07-09T23:58:32.558667561Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 9 23:58:32.565497 containerd[1556]: time="2025-07-09T23:58:32.565457781Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:32.588311 containerd[1556]: time="2025-07-09T23:58:32.588270472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:32.588863 containerd[1556]: time="2025-07-09T23:58:32.588774938Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.120060094s" Jul 9 23:58:32.588863 containerd[1556]: 
time="2025-07-09T23:58:32.588797481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 9 23:58:32.589171 containerd[1556]: time="2025-07-09T23:58:32.589155208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 23:58:33.413155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1240994180.mount: Deactivated successfully. Jul 9 23:58:34.203193 containerd[1556]: time="2025-07-09T23:58:34.203154612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:34.211110 containerd[1556]: time="2025-07-09T23:58:34.210944616Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 9 23:58:34.228160 containerd[1556]: time="2025-07-09T23:58:34.228130053Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:34.275894 containerd[1556]: time="2025-07-09T23:58:34.275818611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:34.276955 containerd[1556]: time="2025-07-09T23:58:34.276650772Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.687475806s" Jul 9 23:58:34.276955 containerd[1556]: time="2025-07-09T23:58:34.276678064Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 23:58:34.277158 containerd[1556]: time="2025-07-09T23:58:34.277136786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 23:58:34.831755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554412931.mount: Deactivated successfully. Jul 9 23:58:34.834122 containerd[1556]: time="2025-07-09T23:58:34.834086359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:34.834747 containerd[1556]: time="2025-07-09T23:58:34.834584191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 9 23:58:34.834747 containerd[1556]: time="2025-07-09T23:58:34.834635412Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:34.836399 containerd[1556]: time="2025-07-09T23:58:34.836366323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:34.837081 containerd[1556]: time="2025-07-09T23:58:34.836992424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 559.830501ms" Jul 9 23:58:34.837081 containerd[1556]: time="2025-07-09T23:58:34.837015257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 23:58:34.837697 
containerd[1556]: time="2025-07-09T23:58:34.837424522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 9 23:58:35.398448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416621741.mount: Deactivated successfully. Jul 9 23:58:36.532173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 9 23:58:36.542308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:58:36.980208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:58:36.984086 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:58:37.160090 kubelet[2258]: E0709 23:58:37.160060 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:58:37.161884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:58:37.162049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:58:37.162373 systemd[1]: kubelet.service: Consumed 104ms CPU time, 110.5M memory peak. Jul 9 23:58:38.160030 update_engine[1539]: I20250709 23:58:38.159979 1539 update_attempter.cc:509] Updating boot flags... 
Jul 9 23:58:38.233802 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2275) Jul 9 23:58:38.332945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2277) Jul 9 23:58:40.093404 containerd[1556]: time="2025-07-09T23:58:40.093357150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:40.105836 containerd[1556]: time="2025-07-09T23:58:40.105789432Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 9 23:58:40.122242 containerd[1556]: time="2025-07-09T23:58:40.122201250Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:40.134010 containerd[1556]: time="2025-07-09T23:58:40.133960462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:40.135025 containerd[1556]: time="2025-07-09T23:58:40.134671684Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.29722662s" Jul 9 23:58:40.135025 containerd[1556]: time="2025-07-09T23:58:40.134700934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 9 23:58:42.357636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:58:42.357769 systemd[1]: kubelet.service: Consumed 104ms CPU time, 110.5M memory peak. Jul 9 23:58:42.365033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:58:42.386349 systemd[1]: Reload requested from client PID 2317 ('systemctl') (unit session-9.scope)... Jul 9 23:58:42.386363 systemd[1]: Reloading... Jul 9 23:58:42.466875 zram_generator::config[2369]: No configuration found. Jul 9 23:58:42.532584 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 9 23:58:42.550645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:58:42.615892 systemd[1]: Reloading finished in 229 ms. Jul 9 23:58:42.641408 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 23:58:42.641466 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 23:58:42.641645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:58:42.647033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:58:43.056329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:58:43.059496 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:58:43.100673 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:58:43.100673 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 9 23:58:43.100673 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:58:43.100975 kubelet[2429]: I0709 23:58:43.100750 2429 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:58:43.265676 kubelet[2429]: I0709 23:58:43.265650 2429 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 23:58:43.265779 kubelet[2429]: I0709 23:58:43.265772 2429 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:58:43.266006 kubelet[2429]: I0709 23:58:43.265994 2429 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 23:58:43.594977 kubelet[2429]: E0709 23:58:43.594937 2429 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:43.616906 kubelet[2429]: I0709 23:58:43.616862 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:58:43.743185 kubelet[2429]: E0709 23:58:43.743145 2429 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 23:58:43.743185 kubelet[2429]: I0709 23:58:43.743180 2429 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jul 9 23:58:43.771458 kubelet[2429]: I0709 23:58:43.771430 2429 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:58:43.797072 kubelet[2429]: I0709 23:58:43.797015 2429 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:58:43.797189 kubelet[2429]: I0709 23:58:43.797071 2429 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy"
:"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:58:43.807995 kubelet[2429]: I0709 23:58:43.807966 2429 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:58:43.807995 kubelet[2429]: I0709 23:58:43.807994 2429 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 23:58:43.815942 kubelet[2429]: I0709 23:58:43.815913 2429 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:58:43.855875 kubelet[2429]: I0709 23:58:43.855063 2429 kubelet.go:446] "Attempting to sync node with API server" Jul 9 23:58:43.855875 kubelet[2429]: I0709 23:58:43.855104 2429 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:58:43.855875 kubelet[2429]: I0709 23:58:43.855125 2429 kubelet.go:352] "Adding apiserver pod source" Jul 9 23:58:43.855875 kubelet[2429]: I0709 23:58:43.855134 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:58:43.881904 kubelet[2429]: W0709 23:58:43.881844 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:43.882259 kubelet[2429]: E0709 23:58:43.881999 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:43.882259 kubelet[2429]: W0709 23:58:43.882066 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: 
connection refused Jul 9 23:58:43.882259 kubelet[2429]: E0709 23:58:43.882085 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:43.882259 kubelet[2429]: I0709 23:58:43.882133 2429 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 23:58:43.887671 kubelet[2429]: I0709 23:58:43.887603 2429 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:58:43.889055 kubelet[2429]: W0709 23:58:43.888685 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 23:58:43.894413 kubelet[2429]: I0709 23:58:43.894247 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 23:58:43.894413 kubelet[2429]: I0709 23:58:43.894272 2429 server.go:1287] "Started kubelet" Jul 9 23:58:43.894537 kubelet[2429]: I0709 23:58:43.894509 2429 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:58:43.898578 kubelet[2429]: I0709 23:58:43.898124 2429 server.go:479] "Adding debug handlers to kubelet server" Jul 9 23:58:43.899627 kubelet[2429]: I0709 23:58:43.899361 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:58:43.899627 kubelet[2429]: I0709 23:58:43.899586 2429 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:58:43.903870 kubelet[2429]: I0709 23:58:43.902510 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:58:43.903870 kubelet[2429]: E0709 23:58:43.900554 2429 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bab0e0b78f6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 23:58:43.894259567 +0000 UTC m=+0.831814307,LastTimestamp:2025-07-09 23:58:43.894259567 +0000 UTC m=+0.831814307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 23:58:43.903870 kubelet[2429]: I0709 23:58:43.903050 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:58:43.904740 kubelet[2429]: I0709 23:58:43.904724 2429 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 23:58:43.904943 kubelet[2429]: E0709 23:58:43.904924 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:58:43.906558 kubelet[2429]: E0709 23:58:43.906522 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" Jul 9 23:58:43.906649 kubelet[2429]: I0709 23:58:43.906640 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:58:43.907323 kubelet[2429]: W0709 23:58:43.907293 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:43.907419 kubelet[2429]: E0709 23:58:43.907393 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:43.907599 kubelet[2429]: I0709 23:58:43.907588 2429 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:58:43.907711 kubelet[2429]: I0709 23:58:43.907699 2429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:58:43.912307 kubelet[2429]: I0709 23:58:43.912293 2429 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:58:43.913419 kubelet[2429]: E0709 23:58:43.913290 2429 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:58:43.913574 kubelet[2429]: I0709 23:58:43.913564 2429 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:58:43.940229 kubelet[2429]: I0709 23:58:43.940207 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:58:43.940548 kubelet[2429]: I0709 23:58:43.940217 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:58:43.940548 kubelet[2429]: I0709 23:58:43.940368 2429 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:58:43.946400 kubelet[2429]: I0709 23:58:43.946188 2429 policy_none.go:49] "None policy: Start" Jul 9 23:58:43.946400 kubelet[2429]: I0709 23:58:43.946220 2429 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:58:43.946400 kubelet[2429]: I0709 23:58:43.946233 2429 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:58:43.951650 kubelet[2429]: I0709 23:58:43.951621 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:58:43.952482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:58:43.953758 kubelet[2429]: I0709 23:58:43.953743 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:58:43.953817 kubelet[2429]: I0709 23:58:43.953811 2429 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 23:58:43.953896 kubelet[2429]: I0709 23:58:43.953890 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 23:58:43.953946 kubelet[2429]: I0709 23:58:43.953942 2429 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 23:58:43.954015 kubelet[2429]: E0709 23:58:43.954004 2429 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:58:43.954769 kubelet[2429]: W0709 23:58:43.954750 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:43.954813 kubelet[2429]: E0709 23:58:43.954790 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:43.962354 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:58:43.965427 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 9 23:58:43.976346 kubelet[2429]: I0709 23:58:43.975748 2429 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:58:43.976346 kubelet[2429]: I0709 23:58:43.975895 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:58:43.976346 kubelet[2429]: I0709 23:58:43.975903 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:58:43.976346 kubelet[2429]: I0709 23:58:43.976285 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:58:43.977176 kubelet[2429]: E0709 23:58:43.977167 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:58:43.977271 kubelet[2429]: E0709 23:58:43.977263 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 23:58:44.077632 kubelet[2429]: I0709 23:58:44.077392 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:44.077821 kubelet[2429]: E0709 23:58:44.077806 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 9 23:58:44.086326 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 9 23:58:44.101311 kubelet[2429]: E0709 23:58:44.101289 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:44.104428 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 9 23:58:44.106304 kubelet[2429]: E0709 23:58:44.106255 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:44.107624 kubelet[2429]: E0709 23:58:44.107580 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" Jul 9 23:58:44.108931 systemd[1]: Created slice kubepods-burstable-pod4bc9b69b10c8f0cf6ab7eef38ce21413.slice - libcontainer container kubepods-burstable-pod4bc9b69b10c8f0cf6ab7eef38ce21413.slice. 
Jul 9 23:58:44.110790 kubelet[2429]: E0709 23:58:44.110734 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:44.113923 kubelet[2429]: I0709 23:58:44.113807 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bc9b69b10c8f0cf6ab7eef38ce21413-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4bc9b69b10c8f0cf6ab7eef38ce21413\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:44.113923 kubelet[2429]: I0709 23:58:44.113829 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:44.113923 kubelet[2429]: I0709 23:58:44.113843 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:44.113923 kubelet[2429]: I0709 23:58:44.113879 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bc9b69b10c8f0cf6ab7eef38ce21413-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc9b69b10c8f0cf6ab7eef38ce21413\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:44.113923 kubelet[2429]: I0709 23:58:44.113899 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:44.114131 kubelet[2429]: I0709 23:58:44.113929 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:44.114131 kubelet[2429]: I0709 23:58:44.113942 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:44.114131 kubelet[2429]: I0709 23:58:44.113955 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:58:44.114131 kubelet[2429]: I0709 23:58:44.113965 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bc9b69b10c8f0cf6ab7eef38ce21413-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc9b69b10c8f0cf6ab7eef38ce21413\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:44.280016 kubelet[2429]: I0709 23:58:44.279907 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:44.280312 kubelet[2429]: E0709 23:58:44.280286 
2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 9 23:58:44.403161 containerd[1556]: time="2025-07-09T23:58:44.403092077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 9 23:58:44.407338 containerd[1556]: time="2025-07-09T23:58:44.407219644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 9 23:58:44.412081 containerd[1556]: time="2025-07-09T23:58:44.412050002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4bc9b69b10c8f0cf6ab7eef38ce21413,Namespace:kube-system,Attempt:0,}" Jul 9 23:58:44.507899 kubelet[2429]: E0709 23:58:44.507838 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" Jul 9 23:58:44.681991 kubelet[2429]: I0709 23:58:44.681756 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:44.681991 kubelet[2429]: E0709 23:58:44.681963 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 9 23:58:44.700425 kubelet[2429]: W0709 23:58:44.700381 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:44.700526 
kubelet[2429]: E0709 23:58:44.700432 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:44.772596 kubelet[2429]: W0709 23:58:44.772565 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:44.772685 kubelet[2429]: E0709 23:58:44.772600 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:45.069453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741149296.mount: Deactivated successfully. 
Jul 9 23:58:45.147342 containerd[1556]: time="2025-07-09T23:58:45.147309457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:58:45.159751 containerd[1556]: time="2025-07-09T23:58:45.159718797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 9 23:58:45.164600 containerd[1556]: time="2025-07-09T23:58:45.164582058Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:58:45.174417 containerd[1556]: time="2025-07-09T23:58:45.174361041Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:58:45.179021 containerd[1556]: time="2025-07-09T23:58:45.178767099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 23:58:45.183381 containerd[1556]: time="2025-07-09T23:58:45.183306134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:58:45.188207 containerd[1556]: time="2025-07-09T23:58:45.188164639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:58:45.188688 containerd[1556]: time="2025-07-09T23:58:45.188609215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.327773ms" Jul 9 23:58:45.192793 containerd[1556]: time="2025-07-09T23:58:45.192762297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 23:58:45.194652 containerd[1556]: time="2025-07-09T23:58:45.194294918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 782.182758ms" Jul 9 23:58:45.195041 containerd[1556]: time="2025-07-09T23:58:45.195019264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 789.828ms" Jul 9 23:58:45.309506 kubelet[2429]: E0709 23:58:45.309459 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" Jul 9 23:58:45.386564 containerd[1556]: time="2025-07-09T23:58:45.385582246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:58:45.386977 containerd[1556]: time="2025-07-09T23:58:45.386820361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:58:45.387080 containerd[1556]: time="2025-07-09T23:58:45.387032181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:45.387313 containerd[1556]: time="2025-07-09T23:58:45.387226136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395910660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395936637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395946916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.396546835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395639653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395678607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395693411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:45.398099 containerd[1556]: time="2025-07-09T23:58:45.395747217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:45.416643 kubelet[2429]: W0709 23:58:45.416570 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:45.416643 kubelet[2429]: E0709 23:58:45.416614 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:45.424098 systemd[1]: Started cri-containerd-5b1b077618ac793e90f7361cf67efcf94610537ff4800f0ed5c541ea8667e494.scope - libcontainer container 5b1b077618ac793e90f7361cf67efcf94610537ff4800f0ed5c541ea8667e494. Jul 9 23:58:45.428090 systemd[1]: Started cri-containerd-ab661741dc4af065d90c4def6059d9a1bc8a4553d1988bffe46f3249ad39aadd.scope - libcontainer container ab661741dc4af065d90c4def6059d9a1bc8a4553d1988bffe46f3249ad39aadd. Jul 9 23:58:45.429582 systemd[1]: Started cri-containerd-e8005b28138d05379a48f05b3ccc5843be0a718ccbf26212067621362310c265.scope - libcontainer container e8005b28138d05379a48f05b3ccc5843be0a718ccbf26212067621362310c265. 
Jul 9 23:58:45.466229 containerd[1556]: time="2025-07-09T23:58:45.466193933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b1b077618ac793e90f7361cf67efcf94610537ff4800f0ed5c541ea8667e494\"" Jul 9 23:58:45.468403 containerd[1556]: time="2025-07-09T23:58:45.468196680Z" level=info msg="CreateContainer within sandbox \"5b1b077618ac793e90f7361cf67efcf94610537ff4800f0ed5c541ea8667e494\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:58:45.480055 containerd[1556]: time="2025-07-09T23:58:45.480034545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4bc9b69b10c8f0cf6ab7eef38ce21413,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab661741dc4af065d90c4def6059d9a1bc8a4553d1988bffe46f3249ad39aadd\"" Jul 9 23:58:45.481375 containerd[1556]: time="2025-07-09T23:58:45.481364055Z" level=info msg="CreateContainer within sandbox \"ab661741dc4af065d90c4def6059d9a1bc8a4553d1988bffe46f3249ad39aadd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:58:45.482904 containerd[1556]: time="2025-07-09T23:58:45.482828702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8005b28138d05379a48f05b3ccc5843be0a718ccbf26212067621362310c265\"" Jul 9 23:58:45.483580 kubelet[2429]: I0709 23:58:45.483472 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:45.483731 kubelet[2429]: E0709 23:58:45.483705 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 9 23:58:45.484127 containerd[1556]: time="2025-07-09T23:58:45.484109798Z" level=info 
msg="CreateContainer within sandbox \"e8005b28138d05379a48f05b3ccc5843be0a718ccbf26212067621362310c265\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:58:45.556751 kubelet[2429]: W0709 23:58:45.556673 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:45.556751 kubelet[2429]: E0709 23:58:45.556719 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:45.646567 containerd[1556]: time="2025-07-09T23:58:45.646370673Z" level=info msg="CreateContainer within sandbox \"ab661741dc4af065d90c4def6059d9a1bc8a4553d1988bffe46f3249ad39aadd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"34e2001022aca4407d857a02cb9a6d8146d1c5bba06f546b28005539b9a62019\"" Jul 9 23:58:45.649050 containerd[1556]: time="2025-07-09T23:58:45.649025978Z" level=info msg="StartContainer for \"34e2001022aca4407d857a02cb9a6d8146d1c5bba06f546b28005539b9a62019\"" Jul 9 23:58:45.660417 containerd[1556]: time="2025-07-09T23:58:45.660342021Z" level=info msg="CreateContainer within sandbox \"5b1b077618ac793e90f7361cf67efcf94610537ff4800f0ed5c541ea8667e494\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02b9576e5f649c5facd25b0b30b7ec7eef03b1cd9feda8761f571a7044ba4214\"" Jul 9 23:58:45.660844 containerd[1556]: time="2025-07-09T23:58:45.660828870Z" level=info msg="StartContainer for \"02b9576e5f649c5facd25b0b30b7ec7eef03b1cd9feda8761f571a7044ba4214\"" Jul 9 23:58:45.661966 containerd[1556]: 
time="2025-07-09T23:58:45.661218325Z" level=info msg="CreateContainer within sandbox \"e8005b28138d05379a48f05b3ccc5843be0a718ccbf26212067621362310c265\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"220c46ebba5358672781e674dbd05a6b82603b0af33baa6a720693682ff5ffe5\"" Jul 9 23:58:45.663688 containerd[1556]: time="2025-07-09T23:58:45.663524607Z" level=info msg="StartContainer for \"220c46ebba5358672781e674dbd05a6b82603b0af33baa6a720693682ff5ffe5\"" Jul 9 23:58:45.675960 systemd[1]: Started cri-containerd-34e2001022aca4407d857a02cb9a6d8146d1c5bba06f546b28005539b9a62019.scope - libcontainer container 34e2001022aca4407d857a02cb9a6d8146d1c5bba06f546b28005539b9a62019. Jul 9 23:58:45.687953 systemd[1]: Started cri-containerd-02b9576e5f649c5facd25b0b30b7ec7eef03b1cd9feda8761f571a7044ba4214.scope - libcontainer container 02b9576e5f649c5facd25b0b30b7ec7eef03b1cd9feda8761f571a7044ba4214. Jul 9 23:58:45.691800 systemd[1]: Started cri-containerd-220c46ebba5358672781e674dbd05a6b82603b0af33baa6a720693682ff5ffe5.scope - libcontainer container 220c46ebba5358672781e674dbd05a6b82603b0af33baa6a720693682ff5ffe5. 
Jul 9 23:58:45.743870 containerd[1556]: time="2025-07-09T23:58:45.743170369Z" level=info msg="StartContainer for \"34e2001022aca4407d857a02cb9a6d8146d1c5bba06f546b28005539b9a62019\" returns successfully" Jul 9 23:58:45.753552 containerd[1556]: time="2025-07-09T23:58:45.753472960Z" level=info msg="StartContainer for \"220c46ebba5358672781e674dbd05a6b82603b0af33baa6a720693682ff5ffe5\" returns successfully" Jul 9 23:58:45.753552 containerd[1556]: time="2025-07-09T23:58:45.753521320Z" level=info msg="StartContainer for \"02b9576e5f649c5facd25b0b30b7ec7eef03b1cd9feda8761f571a7044ba4214\" returns successfully" Jul 9 23:58:45.767683 kubelet[2429]: E0709 23:58:45.767657 2429 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:45.960085 kubelet[2429]: E0709 23:58:45.960026 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:45.960829 kubelet[2429]: E0709 23:58:45.960740 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:45.961783 kubelet[2429]: E0709 23:58:45.961722 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:46.910153 kubelet[2429]: E0709 23:58:46.910115 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: 
connection refused" interval="3.2s" Jul 9 23:58:46.964734 kubelet[2429]: E0709 23:58:46.964540 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:46.965126 kubelet[2429]: E0709 23:58:46.964905 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:46.965126 kubelet[2429]: E0709 23:58:46.965012 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:47.085161 kubelet[2429]: I0709 23:58:47.085139 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:47.085587 kubelet[2429]: E0709 23:58:47.085553 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 9 23:58:47.135585 kubelet[2429]: W0709 23:58:47.135531 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:47.135585 kubelet[2429]: E0709 23:58:47.135586 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:47.669400 kubelet[2429]: W0709 23:58:47.669353 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:47.669400 kubelet[2429]: E0709 23:58:47.669402 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:47.695125 kubelet[2429]: W0709 23:58:47.695039 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 9 23:58:47.695125 kubelet[2429]: E0709 23:58:47.695071 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:58:48.899426 kubelet[2429]: E0709 23:58:48.899398 2429 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 9 23:58:49.265710 kubelet[2429]: E0709 23:58:49.265625 2429 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 9 23:58:49.709072 kubelet[2429]: E0709 23:58:49.709043 2429 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 9 23:58:50.112922 kubelet[2429]: E0709 23:58:50.112832 2429 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 23:58:50.286914 kubelet[2429]: I0709 23:58:50.286891 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:50.325888 kubelet[2429]: I0709 23:58:50.325718 2429 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:58:50.325888 kubelet[2429]: E0709 23:58:50.325743 2429 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 9 23:58:50.344211 kubelet[2429]: E0709 23:58:50.344067 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:58:50.445104 kubelet[2429]: E0709 23:58:50.445040 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:58:50.535789 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-9.scope)... Jul 9 23:58:50.535799 systemd[1]: Reloading... Jul 9 23:58:50.545570 kubelet[2429]: E0709 23:58:50.545550 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:58:50.562429 kubelet[2429]: E0709 23:58:50.562414 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 23:58:50.601581 zram_generator::config[2746]: No configuration found. Jul 9 23:58:50.646233 kubelet[2429]: E0709 23:58:50.646201 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:58:50.660185 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Jul 9 23:58:50.678030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:58:50.746827 kubelet[2429]: E0709 23:58:50.746633 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:58:50.753414 systemd[1]: Reloading finished in 217 ms. Jul 9 23:58:50.774985 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:58:50.784142 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:58:50.784319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:58:50.784355 systemd[1]: kubelet.service: Consumed 434ms CPU time, 132.4M memory peak. Jul 9 23:58:50.788081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:58:51.513619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:58:51.517691 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:58:51.741088 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:58:51.741088 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 23:58:51.741088 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:58:51.763478 kubelet[2813]: I0709 23:58:51.763387 2813 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:58:51.797861 kubelet[2813]: I0709 23:58:51.797380 2813 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 23:58:51.797861 kubelet[2813]: I0709 23:58:51.797400 2813 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:58:51.797861 kubelet[2813]: I0709 23:58:51.797728 2813 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 23:58:51.798997 kubelet[2813]: I0709 23:58:51.798985 2813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 23:58:51.800934 kubelet[2813]: I0709 23:58:51.800917 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:58:51.826657 kubelet[2813]: E0709 23:58:51.826623 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 23:58:51.826657 kubelet[2813]: I0709 23:58:51.826654 2813 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 9 23:58:51.829069 kubelet[2813]: I0709 23:58:51.829046 2813 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:58:51.829221 kubelet[2813]: I0709 23:58:51.829192 2813 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:58:51.829356 kubelet[2813]: I0709 23:58:51.829224 2813 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:58:51.829420 kubelet[2813]: I0709 23:58:51.829363 2813 topology_manager.go:138] "Creating topology manager with none policy" Jul 
9 23:58:51.829420 kubelet[2813]: I0709 23:58:51.829371 2813 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 23:58:51.829420 kubelet[2813]: I0709 23:58:51.829406 2813 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:58:51.829560 kubelet[2813]: I0709 23:58:51.829550 2813 kubelet.go:446] "Attempting to sync node with API server" Jul 9 23:58:51.829585 kubelet[2813]: I0709 23:58:51.829566 2813 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:58:51.829585 kubelet[2813]: I0709 23:58:51.829578 2813 kubelet.go:352] "Adding apiserver pod source" Jul 9 23:58:51.829585 kubelet[2813]: I0709 23:58:51.829585 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:58:51.833392 kubelet[2813]: I0709 23:58:51.833377 2813 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 23:58:51.833704 kubelet[2813]: I0709 23:58:51.833696 2813 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:58:51.835479 kubelet[2813]: I0709 23:58:51.835459 2813 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 23:58:51.835550 kubelet[2813]: I0709 23:58:51.835486 2813 server.go:1287] "Started kubelet" Jul 9 23:58:51.850445 kubelet[2813]: I0709 23:58:51.850414 2813 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:58:51.860050 kubelet[2813]: I0709 23:58:51.859343 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:58:51.860050 kubelet[2813]: I0709 23:58:51.859540 2813 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:58:51.862079 kubelet[2813]: I0709 23:58:51.862061 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:58:51.877788 kubelet[2813]: I0709 23:58:51.877762 2813 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:58:51.884197 kubelet[2813]: I0709 23:58:51.884182 2813 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 23:58:51.984694 kubelet[2813]: I0709 23:58:51.984658 2813 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:58:51.984820 kubelet[2813]: I0709 23:58:51.984810 2813 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:58:51.993123 kubelet[2813]: I0709 23:58:51.993096 2813 server.go:479] "Adding debug handlers to kubelet server" Jul 9 23:58:51.996569 kubelet[2813]: I0709 23:58:51.996542 2813 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:58:51.996668 kubelet[2813]: I0709 23:58:51.996631 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:58:52.000970 kubelet[2813]: I0709 23:58:52.000013 2813 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:58:52.001058 kubelet[2813]: I0709 23:58:52.000993 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:58:52.001745 kubelet[2813]: I0709 23:58:52.001725 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:58:52.001745 kubelet[2813]: I0709 23:58:52.001743 2813 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 23:58:52.001806 kubelet[2813]: I0709 23:58:52.001756 2813 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 23:58:52.001806 kubelet[2813]: I0709 23:58:52.001760 2813 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 23:58:52.001806 kubelet[2813]: E0709 23:58:52.001789 2813 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:58:52.031399 sudo[2828]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:58:52.032158 sudo[2828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:58:52.042571 kubelet[2813]: I0709 23:58:52.042548 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:58:52.042571 kubelet[2813]: I0709 23:58:52.042565 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:58:52.042691 kubelet[2813]: I0709 23:58:52.042592 2813 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:58:52.042879 kubelet[2813]: I0709 23:58:52.042839 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:58:52.043869 kubelet[2813]: I0709 23:58:52.043590 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:58:52.043869 kubelet[2813]: I0709 23:58:52.043614 2813 policy_none.go:49] "None policy: Start" Jul 9 23:58:52.043869 kubelet[2813]: I0709 23:58:52.043620 2813 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:58:52.043869 kubelet[2813]: I0709 23:58:52.043632 2813 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:58:52.043869 kubelet[2813]: I0709 23:58:52.043730 2813 state_mem.go:75] "Updated machine memory state" Jul 9 23:58:52.049675 kubelet[2813]: I0709 23:58:52.049619 2813 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:58:52.050808 kubelet[2813]: I0709 23:58:52.050013 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:58:52.050905 
kubelet[2813]: I0709 23:58:52.050877 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:58:52.051054 kubelet[2813]: I0709 23:58:52.051042 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:58:52.053088 kubelet[2813]: E0709 23:58:52.053067 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:58:52.111189 kubelet[2813]: I0709 23:58:52.110644 2813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:52.111189 kubelet[2813]: I0709 23:58:52.111069 2813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:52.112986 kubelet[2813]: I0709 23:58:52.112828 2813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:58:52.154825 kubelet[2813]: I0709 23:58:52.154805 2813 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:58:52.179374 kubelet[2813]: I0709 23:58:52.178897 2813 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 23:58:52.179374 kubelet[2813]: I0709 23:58:52.178948 2813 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:58:52.286719 kubelet[2813]: I0709 23:58:52.286686 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bc9b69b10c8f0cf6ab7eef38ce21413-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc9b69b10c8f0cf6ab7eef38ce21413\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:52.286864 kubelet[2813]: I0709 23:58:52.286845 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4bc9b69b10c8f0cf6ab7eef38ce21413-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc9b69b10c8f0cf6ab7eef38ce21413\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:52.286955 kubelet[2813]: I0709 23:58:52.286918 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:52.287008 kubelet[2813]: I0709 23:58:52.286999 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bc9b69b10c8f0cf6ab7eef38ce21413-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4bc9b69b10c8f0cf6ab7eef38ce21413\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:52.287054 kubelet[2813]: I0709 23:58:52.287047 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:52.287094 kubelet[2813]: I0709 23:58:52.287088 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:52.287132 kubelet[2813]: I0709 23:58:52.287126 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:52.287763 kubelet[2813]: I0709 23:58:52.287269 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:58:52.287763 kubelet[2813]: I0709 23:58:52.287283 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:58:52.605564 sudo[2828]: pam_unix(sudo:session): session closed for user root Jul 9 23:58:52.830380 kubelet[2813]: I0709 23:58:52.830317 2813 apiserver.go:52] "Watching apiserver" Jul 9 23:58:52.885119 kubelet[2813]: I0709 23:58:52.885045 2813 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:58:52.892465 kubelet[2813]: I0709 23:58:52.892366 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.892343543 podStartE2EDuration="892.343543ms" podCreationTimestamp="2025-07-09 23:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:58:52.892247588 +0000 UTC m=+1.196528299" watchObservedRunningTime="2025-07-09 23:58:52.892343543 +0000 UTC m=+1.196624250" Jul 9 23:58:52.952340 kubelet[2813]: I0709 23:58:52.952298 2813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.952283261 podStartE2EDuration="952.283261ms" podCreationTimestamp="2025-07-09 23:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:58:52.926094844 +0000 UTC m=+1.230375551" watchObservedRunningTime="2025-07-09 23:58:52.952283261 +0000 UTC m=+1.256563969" Jul 9 23:58:52.970008 kubelet[2813]: I0709 23:58:52.969966 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.969950561 podStartE2EDuration="969.950561ms" podCreationTimestamp="2025-07-09 23:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:58:52.952510446 +0000 UTC m=+1.256791163" watchObservedRunningTime="2025-07-09 23:58:52.969950561 +0000 UTC m=+1.274231276" Jul 9 23:58:53.029607 kubelet[2813]: I0709 23:58:53.029585 2813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:53.070834 kubelet[2813]: E0709 23:58:53.070810 2813 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 23:58:55.207896 sudo[1857]: pam_unix(sudo:session): session closed for user root Jul 9 23:58:55.208883 sshd[1856]: Connection closed by 139.178.89.65 port 47358 Jul 9 23:58:55.216762 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Jul 9 23:58:55.218761 systemd[1]: sshd@6-139.178.70.109:22-139.178.89.65:47358.service: Deactivated successfully. Jul 9 23:58:55.220253 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 23:58:55.220446 systemd[1]: session-9.scope: Consumed 3.263s CPU time, 205.3M memory peak. 
Jul 9 23:58:55.222091 systemd-logind[1536]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:58:55.222752 systemd-logind[1536]: Removed session 9. Jul 9 23:58:55.702219 kubelet[2813]: I0709 23:58:55.701947 2813 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:58:55.702507 containerd[1556]: time="2025-07-09T23:58:55.702165151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:58:55.702671 kubelet[2813]: I0709 23:58:55.702274 2813 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:58:56.277165 systemd[1]: Created slice kubepods-burstable-podb2c51f7e_468f_481e_ba96_38fbb2b938fe.slice - libcontainer container kubepods-burstable-podb2c51f7e_468f_481e_ba96_38fbb2b938fe.slice. Jul 9 23:58:56.283035 systemd[1]: Created slice kubepods-besteffort-pod74ec8a89_2207_4df7_85da_a1bd0ad83874.slice - libcontainer container kubepods-besteffort-pod74ec8a89_2207_4df7_85da_a1bd0ad83874.slice. 
Jul 9 23:58:56.309023 kubelet[2813]: I0709 23:58:56.308402 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-lib-modules\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309023 kubelet[2813]: I0709 23:58:56.308442 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-etc-cni-netd\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309023 kubelet[2813]: I0709 23:58:56.308464 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74ec8a89-2207-4df7-85da-a1bd0ad83874-kube-proxy\") pod \"kube-proxy-8xb5c\" (UID: \"74ec8a89-2207-4df7-85da-a1bd0ad83874\") " pod="kube-system/kube-proxy-8xb5c" Jul 9 23:58:56.309023 kubelet[2813]: I0709 23:58:56.308495 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cni-path\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309023 kubelet[2813]: I0709 23:58:56.308510 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hubble-tls\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309023 kubelet[2813]: I0709 23:58:56.308529 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hostproc\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309258 kubelet[2813]: I0709 23:58:56.308544 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndnrm\" (UniqueName: \"kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-kube-api-access-ndnrm\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309258 kubelet[2813]: I0709 23:58:56.308566 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-bpf-maps\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309258 kubelet[2813]: I0709 23:58:56.308588 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74ec8a89-2207-4df7-85da-a1bd0ad83874-xtables-lock\") pod \"kube-proxy-8xb5c\" (UID: \"74ec8a89-2207-4df7-85da-a1bd0ad83874\") " pod="kube-system/kube-proxy-8xb5c" Jul 9 23:58:56.309258 kubelet[2813]: I0709 23:58:56.308601 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-xtables-lock\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309258 kubelet[2813]: I0709 23:58:56.308615 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-kernel\") pod 
\"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309359 kubelet[2813]: I0709 23:58:56.308643 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74ec8a89-2207-4df7-85da-a1bd0ad83874-lib-modules\") pod \"kube-proxy-8xb5c\" (UID: \"74ec8a89-2207-4df7-85da-a1bd0ad83874\") " pod="kube-system/kube-proxy-8xb5c" Jul 9 23:58:56.309359 kubelet[2813]: I0709 23:58:56.308659 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-cgroup\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309359 kubelet[2813]: I0709 23:58:56.308676 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhx7\" (UniqueName: \"kubernetes.io/projected/74ec8a89-2207-4df7-85da-a1bd0ad83874-kube-api-access-szhx7\") pod \"kube-proxy-8xb5c\" (UID: \"74ec8a89-2207-4df7-85da-a1bd0ad83874\") " pod="kube-system/kube-proxy-8xb5c" Jul 9 23:58:56.309359 kubelet[2813]: I0709 23:58:56.308695 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-run\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309359 kubelet[2813]: I0709 23:58:56.308720 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2c51f7e-468f-481e-ba96-38fbb2b938fe-clustermesh-secrets\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 
23:58:56.309466 kubelet[2813]: I0709 23:58:56.308737 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-config-path\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.309466 kubelet[2813]: I0709 23:58:56.308753 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-net\") pod \"cilium-gkn5s\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " pod="kube-system/cilium-gkn5s" Jul 9 23:58:56.429348 kubelet[2813]: E0709 23:58:56.429318 2813 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 9 23:58:56.429348 kubelet[2813]: E0709 23:58:56.429352 2813 projected.go:194] Error preparing data for projected volume kube-api-access-szhx7 for pod kube-system/kube-proxy-8xb5c: configmap "kube-root-ca.crt" not found Jul 9 23:58:56.429483 kubelet[2813]: E0709 23:58:56.429417 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74ec8a89-2207-4df7-85da-a1bd0ad83874-kube-api-access-szhx7 podName:74ec8a89-2207-4df7-85da-a1bd0ad83874 nodeName:}" failed. No retries permitted until 2025-07-09 23:58:56.929399884 +0000 UTC m=+5.233680589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-szhx7" (UniqueName: "kubernetes.io/projected/74ec8a89-2207-4df7-85da-a1bd0ad83874-kube-api-access-szhx7") pod "kube-proxy-8xb5c" (UID: "74ec8a89-2207-4df7-85da-a1bd0ad83874") : configmap "kube-root-ca.crt" not found Jul 9 23:58:56.429675 kubelet[2813]: E0709 23:58:56.429318 2813 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 9 23:58:56.429675 kubelet[2813]: E0709 23:58:56.429630 2813 projected.go:194] Error preparing data for projected volume kube-api-access-ndnrm for pod kube-system/cilium-gkn5s: configmap "kube-root-ca.crt" not found Jul 9 23:58:56.429675 kubelet[2813]: E0709 23:58:56.429662 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-kube-api-access-ndnrm podName:b2c51f7e-468f-481e-ba96-38fbb2b938fe nodeName:}" failed. No retries permitted until 2025-07-09 23:58:56.929649766 +0000 UTC m=+5.233930467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ndnrm" (UniqueName: "kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-kube-api-access-ndnrm") pod "cilium-gkn5s" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe") : configmap "kube-root-ca.crt" not found Jul 9 23:58:56.702263 systemd[1]: Created slice kubepods-besteffort-pod25058434_89bd_4f24_90d8_a71feb5bd4d7.slice - libcontainer container kubepods-besteffort-pod25058434_89bd_4f24_90d8_a71feb5bd4d7.slice. 
Jul 9 23:58:56.711139 kubelet[2813]: I0709 23:58:56.711061 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25058434-89bd-4f24-90d8-a71feb5bd4d7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-z7pf4\" (UID: \"25058434-89bd-4f24-90d8-a71feb5bd4d7\") " pod="kube-system/cilium-operator-6c4d7847fc-z7pf4" Jul 9 23:58:56.711139 kubelet[2813]: I0709 23:58:56.711097 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45l2\" (UniqueName: \"kubernetes.io/projected/25058434-89bd-4f24-90d8-a71feb5bd4d7-kube-api-access-s45l2\") pod \"cilium-operator-6c4d7847fc-z7pf4\" (UID: \"25058434-89bd-4f24-90d8-a71feb5bd4d7\") " pod="kube-system/cilium-operator-6c4d7847fc-z7pf4" Jul 9 23:58:57.007603 containerd[1556]: time="2025-07-09T23:58:57.007420557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-z7pf4,Uid:25058434-89bd-4f24-90d8-a71feb5bd4d7,Namespace:kube-system,Attempt:0,}" Jul 9 23:58:57.032801 containerd[1556]: time="2025-07-09T23:58:57.032258974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:58:57.032801 containerd[1556]: time="2025-07-09T23:58:57.032747649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:58:57.032801 containerd[1556]: time="2025-07-09T23:58:57.032758794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:57.033341 containerd[1556]: time="2025-07-09T23:58:57.032878472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:57.053061 systemd[1]: Started cri-containerd-ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53.scope - libcontainer container ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53. Jul 9 23:58:57.091176 containerd[1556]: time="2025-07-09T23:58:57.091123564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-z7pf4,Uid:25058434-89bd-4f24-90d8-a71feb5bd4d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53\"" Jul 9 23:58:57.092830 containerd[1556]: time="2025-07-09T23:58:57.092810540Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 23:58:57.182560 containerd[1556]: time="2025-07-09T23:58:57.182474728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gkn5s,Uid:b2c51f7e-468f-481e-ba96-38fbb2b938fe,Namespace:kube-system,Attempt:0,}" Jul 9 23:58:57.191998 containerd[1556]: time="2025-07-09T23:58:57.191843253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xb5c,Uid:74ec8a89-2207-4df7-85da-a1bd0ad83874,Namespace:kube-system,Attempt:0,}" Jul 9 23:58:57.277512 containerd[1556]: time="2025-07-09T23:58:57.277375747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:58:57.277512 containerd[1556]: time="2025-07-09T23:58:57.277419381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:58:57.277980 containerd[1556]: time="2025-07-09T23:58:57.277428503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:57.277980 containerd[1556]: time="2025-07-09T23:58:57.277484459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:57.280140 containerd[1556]: time="2025-07-09T23:58:57.280019256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:58:57.280140 containerd[1556]: time="2025-07-09T23:58:57.280074852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:58:57.280140 containerd[1556]: time="2025-07-09T23:58:57.280086923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:57.281016 containerd[1556]: time="2025-07-09T23:58:57.280968612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:58:57.302045 systemd[1]: Started cri-containerd-1ecc0e16a5e007adff52f5ec8d03b5ceadf45b171278c2b9b4c273f56d791e78.scope - libcontainer container 1ecc0e16a5e007adff52f5ec8d03b5ceadf45b171278c2b9b4c273f56d791e78. Jul 9 23:58:57.305594 systemd[1]: Started cri-containerd-cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3.scope - libcontainer container cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3. 
Jul 9 23:58:57.328098 containerd[1556]: time="2025-07-09T23:58:57.328017354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xb5c,Uid:74ec8a89-2207-4df7-85da-a1bd0ad83874,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ecc0e16a5e007adff52f5ec8d03b5ceadf45b171278c2b9b4c273f56d791e78\"" Jul 9 23:58:57.330913 containerd[1556]: time="2025-07-09T23:58:57.330799771Z" level=info msg="CreateContainer within sandbox \"1ecc0e16a5e007adff52f5ec8d03b5ceadf45b171278c2b9b4c273f56d791e78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:58:57.332176 containerd[1556]: time="2025-07-09T23:58:57.332132908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gkn5s,Uid:b2c51f7e-468f-481e-ba96-38fbb2b938fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\"" Jul 9 23:58:57.343379 containerd[1556]: time="2025-07-09T23:58:57.343344758Z" level=info msg="CreateContainer within sandbox \"1ecc0e16a5e007adff52f5ec8d03b5ceadf45b171278c2b9b4c273f56d791e78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e5657590aa274621f50e0566846475e4a94a674d709b0882817169ed18cd7593\"" Jul 9 23:58:57.344145 containerd[1556]: time="2025-07-09T23:58:57.344112419Z" level=info msg="StartContainer for \"e5657590aa274621f50e0566846475e4a94a674d709b0882817169ed18cd7593\"" Jul 9 23:58:57.365007 systemd[1]: Started cri-containerd-e5657590aa274621f50e0566846475e4a94a674d709b0882817169ed18cd7593.scope - libcontainer container e5657590aa274621f50e0566846475e4a94a674d709b0882817169ed18cd7593. Jul 9 23:58:57.388635 containerd[1556]: time="2025-07-09T23:58:57.388607804Z" level=info msg="StartContainer for \"e5657590aa274621f50e0566846475e4a94a674d709b0882817169ed18cd7593\" returns successfully" Jul 9 23:58:58.450864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445136695.mount: Deactivated successfully. 
Jul 9 23:58:59.018290 containerd[1556]: time="2025-07-09T23:58:59.017417146Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 9 23:58:59.018748 containerd[1556]: time="2025-07-09T23:58:59.018723925Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.925886456s" Jul 9 23:58:59.018785 containerd[1556]: time="2025-07-09T23:58:59.018758512Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 9 23:58:59.021170 containerd[1556]: time="2025-07-09T23:58:59.020448657Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 23:58:59.024866 containerd[1556]: time="2025-07-09T23:58:59.024688706Z" level=info msg="CreateContainer within sandbox \"ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 23:58:59.028732 containerd[1556]: time="2025-07-09T23:58:59.028705871Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:59.029339 containerd[1556]: time="2025-07-09T23:58:59.029323882Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:58:59.057140 containerd[1556]: time="2025-07-09T23:58:59.057090552Z" level=info msg="CreateContainer within sandbox \"ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\"" Jul 9 23:58:59.057737 containerd[1556]: time="2025-07-09T23:58:59.057653771Z" level=info msg="StartContainer for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\"" Jul 9 23:58:59.090998 systemd[1]: Started cri-containerd-ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e.scope - libcontainer container ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e. Jul 9 23:58:59.148060 containerd[1556]: time="2025-07-09T23:58:59.148026198Z" level=info msg="StartContainer for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" returns successfully" Jul 9 23:59:00.062709 kubelet[2813]: I0709 23:59:00.062360 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8xb5c" podStartSLOduration=4.062343933 podStartE2EDuration="4.062343933s" podCreationTimestamp="2025-07-09 23:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:58:58.053686116 +0000 UTC m=+6.357966828" watchObservedRunningTime="2025-07-09 23:59:00.062343933 +0000 UTC m=+8.366624647" Jul 9 23:59:03.184653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240073295.mount: Deactivated successfully. 
Jul 9 23:59:03.682218 kubelet[2813]: I0709 23:59:03.682090 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-z7pf4" podStartSLOduration=5.754004089 podStartE2EDuration="7.681630117s" podCreationTimestamp="2025-07-09 23:58:56 +0000 UTC" firstStartedPulling="2025-07-09 23:58:57.092130345 +0000 UTC m=+5.396411050" lastFinishedPulling="2025-07-09 23:58:59.019756367 +0000 UTC m=+7.324037078" observedRunningTime="2025-07-09 23:59:00.063144368 +0000 UTC m=+8.367425080" watchObservedRunningTime="2025-07-09 23:59:03.681630117 +0000 UTC m=+11.985910824" Jul 9 23:59:05.894113 containerd[1556]: time="2025-07-09T23:59:05.893978529Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:59:05.904243 containerd[1556]: time="2025-07-09T23:59:05.904206505Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 9 23:59:05.925048 containerd[1556]: time="2025-07-09T23:59:05.924605888Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:59:05.925872 containerd[1556]: time="2025-07-09T23:59:05.925830154Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.905344152s" Jul 9 23:59:05.925970 containerd[1556]: time="2025-07-09T23:59:05.925956684Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 9 23:59:05.930714 containerd[1556]: time="2025-07-09T23:59:05.930675147Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:59:06.073657 containerd[1556]: time="2025-07-09T23:59:06.073600622Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\"" Jul 9 23:59:06.074329 containerd[1556]: time="2025-07-09T23:59:06.074124052Z" level=info msg="StartContainer for \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\"" Jul 9 23:59:06.191940 systemd[1]: Started cri-containerd-be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e.scope - libcontainer container be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e. Jul 9 23:59:06.214377 containerd[1556]: time="2025-07-09T23:59:06.214347758Z" level=info msg="StartContainer for \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\" returns successfully" Jul 9 23:59:06.223586 systemd[1]: cri-containerd-be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e.scope: Deactivated successfully. Jul 9 23:59:07.022229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e-rootfs.mount: Deactivated successfully. 
Jul 9 23:59:07.242881 containerd[1556]: time="2025-07-09T23:59:07.235906068Z" level=info msg="shim disconnected" id=be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e namespace=k8s.io Jul 9 23:59:07.242881 containerd[1556]: time="2025-07-09T23:59:07.242816668Z" level=warning msg="cleaning up after shim disconnected" id=be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e namespace=k8s.io Jul 9 23:59:07.242881 containerd[1556]: time="2025-07-09T23:59:07.242827284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:59:07.250729 containerd[1556]: time="2025-07-09T23:59:07.250537302Z" level=warning msg="cleanup warnings time=\"2025-07-09T23:59:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 9 23:59:07.476809 containerd[1556]: time="2025-07-09T23:59:07.476774431Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:59:07.649015 containerd[1556]: time="2025-07-09T23:59:07.648915450Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\"" Jul 9 23:59:07.650134 containerd[1556]: time="2025-07-09T23:59:07.649380134Z" level=info msg="StartContainer for \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\"" Jul 9 23:59:07.680001 systemd[1]: Started cri-containerd-22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d.scope - libcontainer container 22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d. 
Jul 9 23:59:07.703216 containerd[1556]: time="2025-07-09T23:59:07.703179136Z" level=info msg="StartContainer for \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\" returns successfully" Jul 9 23:59:07.722884 systemd[1]: cri-containerd-22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d.scope: Deactivated successfully. Jul 9 23:59:07.723808 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:59:07.724017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:59:07.724242 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:59:07.728635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:59:07.732059 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:59:07.745038 containerd[1556]: time="2025-07-09T23:59:07.744984183Z" level=info msg="shim disconnected" id=22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d namespace=k8s.io Jul 9 23:59:07.745038 containerd[1556]: time="2025-07-09T23:59:07.745030708Z" level=warning msg="cleaning up after shim disconnected" id=22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d namespace=k8s.io Jul 9 23:59:07.745288 containerd[1556]: time="2025-07-09T23:59:07.745049925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:59:07.760878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:59:08.022179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d-rootfs.mount: Deactivated successfully. 
Jul 9 23:59:08.478764 containerd[1556]: time="2025-07-09T23:59:08.478738341Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:59:08.572964 containerd[1556]: time="2025-07-09T23:59:08.572934889Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\"" Jul 9 23:59:08.573818 containerd[1556]: time="2025-07-09T23:59:08.573796134Z" level=info msg="StartContainer for \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\"" Jul 9 23:59:08.596984 systemd[1]: Started cri-containerd-cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35.scope - libcontainer container cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35. Jul 9 23:59:08.617340 systemd[1]: cri-containerd-cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35.scope: Deactivated successfully. Jul 9 23:59:08.633592 containerd[1556]: time="2025-07-09T23:59:08.633560943Z" level=info msg="StartContainer for \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\" returns successfully" Jul 9 23:59:08.645662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35-rootfs.mount: Deactivated successfully. 
Jul 9 23:59:08.732243 containerd[1556]: time="2025-07-09T23:59:08.732071304Z" level=info msg="shim disconnected" id=cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35 namespace=k8s.io Jul 9 23:59:08.732243 containerd[1556]: time="2025-07-09T23:59:08.732111581Z" level=warning msg="cleaning up after shim disconnected" id=cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35 namespace=k8s.io Jul 9 23:59:08.732243 containerd[1556]: time="2025-07-09T23:59:08.732119252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:59:08.742775 containerd[1556]: time="2025-07-09T23:59:08.742728486Z" level=warning msg="cleanup warnings time=\"2025-07-09T23:59:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 9 23:59:09.480607 containerd[1556]: time="2025-07-09T23:59:09.480508706Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:59:09.538765 containerd[1556]: time="2025-07-09T23:59:09.538716948Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\"" Jul 9 23:59:09.539149 containerd[1556]: time="2025-07-09T23:59:09.539137915Z" level=info msg="StartContainer for \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\"" Jul 9 23:59:09.567001 systemd[1]: Started cri-containerd-001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e.scope - libcontainer container 001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e. 
Jul 9 23:59:09.582684 systemd[1]: cri-containerd-001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e.scope: Deactivated successfully. Jul 9 23:59:09.589882 containerd[1556]: time="2025-07-09T23:59:09.589820657Z" level=info msg="StartContainer for \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\" returns successfully" Jul 9 23:59:09.603553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e-rootfs.mount: Deactivated successfully. Jul 9 23:59:09.613689 containerd[1556]: time="2025-07-09T23:59:09.613649062Z" level=info msg="shim disconnected" id=001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e namespace=k8s.io Jul 9 23:59:09.613689 containerd[1556]: time="2025-07-09T23:59:09.613685256Z" level=warning msg="cleaning up after shim disconnected" id=001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e namespace=k8s.io Jul 9 23:59:09.613689 containerd[1556]: time="2025-07-09T23:59:09.613695067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:59:10.483109 containerd[1556]: time="2025-07-09T23:59:10.483081814Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:59:10.551458 containerd[1556]: time="2025-07-09T23:59:10.551374614Z" level=info msg="CreateContainer within sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\"" Jul 9 23:59:10.552165 containerd[1556]: time="2025-07-09T23:59:10.551827726Z" level=info msg="StartContainer for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\"" Jul 9 23:59:10.577970 systemd[1]: Started cri-containerd-a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab.scope 
- libcontainer container a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab. Jul 9 23:59:10.604243 containerd[1556]: time="2025-07-09T23:59:10.604214836Z" level=info msg="StartContainer for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" returns successfully" Jul 9 23:59:10.878864 kubelet[2813]: I0709 23:59:10.878838 2813 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 23:59:10.927143 systemd[1]: Created slice kubepods-burstable-poda09ae0ea_768a_4125_b6d5_74ad27e83684.slice - libcontainer container kubepods-burstable-poda09ae0ea_768a_4125_b6d5_74ad27e83684.slice. Jul 9 23:59:10.935879 systemd[1]: Created slice kubepods-burstable-pod2e3b2ed2_a17b_40bf_80fb_c159abd347ce.slice - libcontainer container kubepods-burstable-pod2e3b2ed2_a17b_40bf_80fb_c159abd347ce.slice. Jul 9 23:59:11.004704 kubelet[2813]: I0709 23:59:11.004673 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q76n\" (UniqueName: \"kubernetes.io/projected/a09ae0ea-768a-4125-b6d5-74ad27e83684-kube-api-access-4q76n\") pod \"coredns-668d6bf9bc-6tqv9\" (UID: \"a09ae0ea-768a-4125-b6d5-74ad27e83684\") " pod="kube-system/coredns-668d6bf9bc-6tqv9" Jul 9 23:59:11.004704 kubelet[2813]: I0709 23:59:11.004708 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a09ae0ea-768a-4125-b6d5-74ad27e83684-config-volume\") pod \"coredns-668d6bf9bc-6tqv9\" (UID: \"a09ae0ea-768a-4125-b6d5-74ad27e83684\") " pod="kube-system/coredns-668d6bf9bc-6tqv9" Jul 9 23:59:11.105267 kubelet[2813]: I0709 23:59:11.105222 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3b2ed2-a17b-40bf-80fb-c159abd347ce-config-volume\") pod \"coredns-668d6bf9bc-qz496\" (UID: 
\"2e3b2ed2-a17b-40bf-80fb-c159abd347ce\") " pod="kube-system/coredns-668d6bf9bc-qz496" Jul 9 23:59:11.105267 kubelet[2813]: I0709 23:59:11.105272 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgdj\" (UniqueName: \"kubernetes.io/projected/2e3b2ed2-a17b-40bf-80fb-c159abd347ce-kube-api-access-shgdj\") pod \"coredns-668d6bf9bc-qz496\" (UID: \"2e3b2ed2-a17b-40bf-80fb-c159abd347ce\") " pod="kube-system/coredns-668d6bf9bc-qz496" Jul 9 23:59:11.233705 containerd[1556]: time="2025-07-09T23:59:11.233584723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tqv9,Uid:a09ae0ea-768a-4125-b6d5-74ad27e83684,Namespace:kube-system,Attempt:0,}" Jul 9 23:59:11.246213 containerd[1556]: time="2025-07-09T23:59:11.246173603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz496,Uid:2e3b2ed2-a17b-40bf-80fb-c159abd347ce,Namespace:kube-system,Attempt:0,}" Jul 9 23:59:11.513122 kubelet[2813]: I0709 23:59:11.513019 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gkn5s" podStartSLOduration=6.919196159 podStartE2EDuration="15.513002828s" podCreationTimestamp="2025-07-09 23:58:56 +0000 UTC" firstStartedPulling="2025-07-09 23:58:57.333245991 +0000 UTC m=+5.637526693" lastFinishedPulling="2025-07-09 23:59:05.927052663 +0000 UTC m=+14.231333362" observedRunningTime="2025-07-09 23:59:11.512408292 +0000 UTC m=+19.816689003" watchObservedRunningTime="2025-07-09 23:59:11.513002828 +0000 UTC m=+19.817283541" Jul 9 23:59:11.530149 systemd[1]: run-containerd-runc-k8s.io-a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab-runc.ZMswUM.mount: Deactivated successfully. Jul 9 23:59:27.907096 systemd[1]: Started sshd@7-139.178.70.109:22-185.156.73.234:26774.service - OpenSSH per-connection server daemon (185.156.73.234:26774). 
Jul 9 23:59:29.424992 sshd[3642]: Invalid user matrix from 185.156.73.234 port 26774 Jul 9 23:59:29.615921 sshd[3642]: Connection closed by invalid user matrix 185.156.73.234 port 26774 [preauth] Jul 9 23:59:29.616967 systemd[1]: sshd@7-139.178.70.109:22-185.156.73.234:26774.service: Deactivated successfully. Jul 9 23:59:37.608195 systemd-networkd[1466]: cilium_host: Link UP Jul 9 23:59:37.608283 systemd-networkd[1466]: cilium_net: Link UP Jul 9 23:59:37.608381 systemd-networkd[1466]: cilium_net: Gained carrier Jul 9 23:59:37.608470 systemd-networkd[1466]: cilium_host: Gained carrier Jul 9 23:59:37.706663 systemd-networkd[1466]: cilium_vxlan: Link UP Jul 9 23:59:37.706668 systemd-networkd[1466]: cilium_vxlan: Gained carrier Jul 9 23:59:38.006986 systemd-networkd[1466]: cilium_host: Gained IPv6LL Jul 9 23:59:38.047032 systemd-networkd[1466]: cilium_net: Gained IPv6LL Jul 9 23:59:38.251949 kernel: NET: Registered PF_ALG protocol family Jul 9 23:59:38.697116 systemd-networkd[1466]: lxc_health: Link UP Jul 9 23:59:38.699316 systemd-networkd[1466]: lxc_health: Gained carrier Jul 9 23:59:38.859863 kernel: eth0: renamed from tmpdc135 Jul 9 23:59:38.872868 kernel: eth0: renamed from tmpa6caf Jul 9 23:59:38.872011 systemd-networkd[1466]: lxc3cc775ed258a: Link UP Jul 9 23:59:38.872162 systemd-networkd[1466]: lxc76d4922a6d80: Link UP Jul 9 23:59:38.872314 systemd-networkd[1466]: lxc3cc775ed258a: Gained carrier Jul 9 23:59:38.877117 systemd-networkd[1466]: lxc76d4922a6d80: Gained carrier Jul 9 23:59:39.014967 systemd-networkd[1466]: cilium_vxlan: Gained IPv6LL Jul 9 23:59:39.910976 systemd-networkd[1466]: lxc76d4922a6d80: Gained IPv6LL Jul 9 23:59:40.294980 systemd-networkd[1466]: lxc_health: Gained IPv6LL Jul 9 23:59:40.743015 systemd-networkd[1466]: lxc3cc775ed258a: Gained IPv6LL Jul 9 23:59:41.786552 containerd[1556]: time="2025-07-09T23:59:41.786491856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:59:41.787064 containerd[1556]: time="2025-07-09T23:59:41.786537885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:59:41.787064 containerd[1556]: time="2025-07-09T23:59:41.786550111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:59:41.787893 containerd[1556]: time="2025-07-09T23:59:41.787762434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:59:41.797039 containerd[1556]: time="2025-07-09T23:59:41.789326988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:59:41.797039 containerd[1556]: time="2025-07-09T23:59:41.789829965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:59:41.797039 containerd[1556]: time="2025-07-09T23:59:41.789842641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:59:41.797039 containerd[1556]: time="2025-07-09T23:59:41.790150238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:59:41.816965 systemd[1]: Started cri-containerd-a6caf9ef1981f6b9b5ad4134aee5eb77bbe92e23d20cfb01d70b9572536e5db0.scope - libcontainer container a6caf9ef1981f6b9b5ad4134aee5eb77bbe92e23d20cfb01d70b9572536e5db0. Jul 9 23:59:41.818905 systemd[1]: Started cri-containerd-dc135a75a46df6eefd33244fe93c9f7b119b22edd8510e97bcbb08f3de52b09f.scope - libcontainer container dc135a75a46df6eefd33244fe93c9f7b119b22edd8510e97bcbb08f3de52b09f. 
Jul 9 23:59:41.829697 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:59:41.830747 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:59:41.862065 containerd[1556]: time="2025-07-09T23:59:41.862018888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tqv9,Uid:a09ae0ea-768a-4125-b6d5-74ad27e83684,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6caf9ef1981f6b9b5ad4134aee5eb77bbe92e23d20cfb01d70b9572536e5db0\"" Jul 9 23:59:41.866521 containerd[1556]: time="2025-07-09T23:59:41.866089225Z" level=info msg="CreateContainer within sandbox \"a6caf9ef1981f6b9b5ad4134aee5eb77bbe92e23d20cfb01d70b9572536e5db0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:59:41.867321 containerd[1556]: time="2025-07-09T23:59:41.867294122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz496,Uid:2e3b2ed2-a17b-40bf-80fb-c159abd347ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc135a75a46df6eefd33244fe93c9f7b119b22edd8510e97bcbb08f3de52b09f\"" Jul 9 23:59:41.870373 containerd[1556]: time="2025-07-09T23:59:41.870352340Z" level=info msg="CreateContainer within sandbox \"dc135a75a46df6eefd33244fe93c9f7b119b22edd8510e97bcbb08f3de52b09f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:59:41.881165 containerd[1556]: time="2025-07-09T23:59:41.881138828Z" level=info msg="CreateContainer within sandbox \"dc135a75a46df6eefd33244fe93c9f7b119b22edd8510e97bcbb08f3de52b09f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b213cd3fb92d48e97f03904bbea579e61c5dab61528a40de16dfd5338173e21\"" Jul 9 23:59:41.881656 containerd[1556]: time="2025-07-09T23:59:41.881600270Z" level=info msg="CreateContainer within sandbox \"a6caf9ef1981f6b9b5ad4134aee5eb77bbe92e23d20cfb01d70b9572536e5db0\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9e4d4b074f69ef6483e13e22e536dc27172a3f4f0fe786ff6f85ad88f9d4834\"" Jul 9 23:59:41.881954 containerd[1556]: time="2025-07-09T23:59:41.881886505Z" level=info msg="StartContainer for \"3b213cd3fb92d48e97f03904bbea579e61c5dab61528a40de16dfd5338173e21\"" Jul 9 23:59:41.882170 containerd[1556]: time="2025-07-09T23:59:41.882156147Z" level=info msg="StartContainer for \"f9e4d4b074f69ef6483e13e22e536dc27172a3f4f0fe786ff6f85ad88f9d4834\"" Jul 9 23:59:41.911011 systemd[1]: Started cri-containerd-f9e4d4b074f69ef6483e13e22e536dc27172a3f4f0fe786ff6f85ad88f9d4834.scope - libcontainer container f9e4d4b074f69ef6483e13e22e536dc27172a3f4f0fe786ff6f85ad88f9d4834. Jul 9 23:59:41.914651 systemd[1]: Started cri-containerd-3b213cd3fb92d48e97f03904bbea579e61c5dab61528a40de16dfd5338173e21.scope - libcontainer container 3b213cd3fb92d48e97f03904bbea579e61c5dab61528a40de16dfd5338173e21. Jul 9 23:59:41.941034 containerd[1556]: time="2025-07-09T23:59:41.940998873Z" level=info msg="StartContainer for \"3b213cd3fb92d48e97f03904bbea579e61c5dab61528a40de16dfd5338173e21\" returns successfully" Jul 9 23:59:41.941130 containerd[1556]: time="2025-07-09T23:59:41.940998864Z" level=info msg="StartContainer for \"f9e4d4b074f69ef6483e13e22e536dc27172a3f4f0fe786ff6f85ad88f9d4834\" returns successfully" Jul 9 23:59:42.555674 kubelet[2813]: I0709 23:59:42.555640 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qz496" podStartSLOduration=46.555628541 podStartE2EDuration="46.555628541s" podCreationTimestamp="2025-07-09 23:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:59:42.555036738 +0000 UTC m=+50.859317445" watchObservedRunningTime="2025-07-09 23:59:42.555628541 +0000 UTC m=+50.859909247" Jul 9 23:59:42.564581 kubelet[2813]: I0709 23:59:42.564299 2813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6tqv9" podStartSLOduration=46.56428788 podStartE2EDuration="46.56428788s" podCreationTimestamp="2025-07-09 23:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:59:42.564040099 +0000 UTC m=+50.868320811" watchObservedRunningTime="2025-07-09 23:59:42.56428788 +0000 UTC m=+50.868568587" Jul 9 23:59:42.794239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901451555.mount: Deactivated successfully. Jul 10 00:00:01.289077 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 10 00:00:01.394135 systemd[1]: logrotate.service: Deactivated successfully. Jul 10 00:00:01.983910 systemd[1]: Started sshd@8-139.178.70.109:22-139.178.89.65:59334.service - OpenSSH per-connection server daemon (139.178.89.65:59334). Jul 10 00:00:02.042780 sshd[4206]: Accepted publickey for core from 139.178.89.65 port 59334 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:02.043874 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:02.047527 systemd-logind[1536]: New session 10 of user core. Jul 10 00:00:02.050946 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:00:02.559421 sshd[4208]: Connection closed by 139.178.89.65 port 59334 Jul 10 00:00:02.559960 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:02.562217 systemd[1]: sshd@8-139.178.70.109:22-139.178.89.65:59334.service: Deactivated successfully. Jul 10 00:00:02.563520 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:00:02.564079 systemd-logind[1536]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:00:02.564610 systemd-logind[1536]: Removed session 10. 
Jul 10 00:00:07.572461 systemd[1]: Started sshd@9-139.178.70.109:22-139.178.89.65:59346.service - OpenSSH per-connection server daemon (139.178.89.65:59346). Jul 10 00:00:07.612496 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 59346 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:07.613495 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:07.616582 systemd-logind[1536]: New session 11 of user core. Jul 10 00:00:07.621994 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:00:07.721775 sshd[4223]: Connection closed by 139.178.89.65 port 59346 Jul 10 00:00:07.722176 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:07.724474 systemd[1]: sshd@9-139.178.70.109:22-139.178.89.65:59346.service: Deactivated successfully. Jul 10 00:00:07.725886 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:00:07.726458 systemd-logind[1536]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:00:07.727139 systemd-logind[1536]: Removed session 11. Jul 10 00:00:12.733053 systemd[1]: Started sshd@10-139.178.70.109:22-139.178.89.65:36270.service - OpenSSH per-connection server daemon (139.178.89.65:36270). Jul 10 00:00:12.769151 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 36270 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:12.770205 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:12.773981 systemd-logind[1536]: New session 12 of user core. Jul 10 00:00:12.782116 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 10 00:00:12.895778 sshd[4239]: Connection closed by 139.178.89.65 port 36270 Jul 10 00:00:12.896237 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:12.902689 systemd[1]: sshd@10-139.178.70.109:22-139.178.89.65:36270.service: Deactivated successfully. Jul 10 00:00:12.904123 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:00:12.904691 systemd-logind[1536]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:00:12.905505 systemd-logind[1536]: Removed session 12. Jul 10 00:00:17.906419 systemd[1]: Started sshd@11-139.178.70.109:22-139.178.89.65:36274.service - OpenSSH per-connection server daemon (139.178.89.65:36274). Jul 10 00:00:17.977757 sshd[4252]: Accepted publickey for core from 139.178.89.65 port 36274 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:17.978601 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:17.982610 systemd-logind[1536]: New session 13 of user core. Jul 10 00:00:17.987946 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:00:18.096725 sshd[4254]: Connection closed by 139.178.89.65 port 36274 Jul 10 00:00:18.096411 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:18.110096 systemd[1]: Started sshd@12-139.178.70.109:22-139.178.89.65:36290.service - OpenSSH per-connection server daemon (139.178.89.65:36290). Jul 10 00:00:18.110473 systemd[1]: sshd@11-139.178.70.109:22-139.178.89.65:36274.service: Deactivated successfully. Jul 10 00:00:18.113405 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:00:18.114211 systemd-logind[1536]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:00:18.116127 systemd-logind[1536]: Removed session 13. 
Jul 10 00:00:18.145453 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 36290 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:18.146573 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:18.155262 systemd-logind[1536]: New session 14 of user core. Jul 10 00:00:18.159043 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:00:18.331880 sshd[4268]: Connection closed by 139.178.89.65 port 36290 Jul 10 00:00:18.332442 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:18.340723 systemd[1]: sshd@12-139.178.70.109:22-139.178.89.65:36290.service: Deactivated successfully. Jul 10 00:00:18.342483 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:00:18.344388 systemd-logind[1536]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:00:18.353253 systemd[1]: Started sshd@13-139.178.70.109:22-139.178.89.65:36298.service - OpenSSH per-connection server daemon (139.178.89.65:36298). Jul 10 00:00:18.357283 systemd-logind[1536]: Removed session 14. Jul 10 00:00:18.438191 sshd[4277]: Accepted publickey for core from 139.178.89.65 port 36298 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:18.439445 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:18.444542 systemd-logind[1536]: New session 15 of user core. Jul 10 00:00:18.453026 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:00:18.572873 sshd[4280]: Connection closed by 139.178.89.65 port 36298 Jul 10 00:00:18.573075 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:18.575484 systemd[1]: sshd@13-139.178.70.109:22-139.178.89.65:36298.service: Deactivated successfully. Jul 10 00:00:18.576832 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 10 00:00:18.577386 systemd-logind[1536]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:00:18.578264 systemd-logind[1536]: Removed session 15. Jul 10 00:00:23.582359 systemd[1]: Started sshd@14-139.178.70.109:22-139.178.89.65:49662.service - OpenSSH per-connection server daemon (139.178.89.65:49662). Jul 10 00:00:23.614825 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 49662 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:23.615700 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:23.618420 systemd-logind[1536]: New session 16 of user core. Jul 10 00:00:23.624998 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:00:23.791936 sshd[4295]: Connection closed by 139.178.89.65 port 49662 Jul 10 00:00:23.792304 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:23.794786 systemd-logind[1536]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:00:23.794929 systemd[1]: sshd@14-139.178.70.109:22-139.178.89.65:49662.service: Deactivated successfully. Jul 10 00:00:23.796142 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:00:23.796719 systemd-logind[1536]: Removed session 16. Jul 10 00:00:28.802971 systemd[1]: Started sshd@15-139.178.70.109:22-139.178.89.65:49668.service - OpenSSH per-connection server daemon (139.178.89.65:49668). Jul 10 00:00:28.843435 sshd[4307]: Accepted publickey for core from 139.178.89.65 port 49668 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:28.844807 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:28.847956 systemd-logind[1536]: New session 17 of user core. Jul 10 00:00:28.850957 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 10 00:00:28.967697 sshd[4309]: Connection closed by 139.178.89.65 port 49668 Jul 10 00:00:28.968810 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:28.977127 systemd[1]: sshd@15-139.178.70.109:22-139.178.89.65:49668.service: Deactivated successfully. Jul 10 00:00:28.978193 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:00:28.978681 systemd-logind[1536]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:00:28.983136 systemd[1]: Started sshd@16-139.178.70.109:22-139.178.89.65:49674.service - OpenSSH per-connection server daemon (139.178.89.65:49674). Jul 10 00:00:28.984339 systemd-logind[1536]: Removed session 17. Jul 10 00:00:29.013659 sshd[4319]: Accepted publickey for core from 139.178.89.65 port 49674 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:29.014454 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:29.018545 systemd-logind[1536]: New session 18 of user core. Jul 10 00:00:29.022941 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:00:29.815544 sshd[4322]: Connection closed by 139.178.89.65 port 49674 Jul 10 00:00:29.815451 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:29.823107 systemd[1]: sshd@16-139.178.70.109:22-139.178.89.65:49674.service: Deactivated successfully. Jul 10 00:00:29.824317 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:00:29.825291 systemd-logind[1536]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:00:29.826251 systemd[1]: Started sshd@17-139.178.70.109:22-139.178.89.65:46436.service - OpenSSH per-connection server daemon (139.178.89.65:46436). Jul 10 00:00:29.827166 systemd-logind[1536]: Removed session 18. 
Jul 10 00:00:29.877229 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 46436 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:29.878550 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:29.882950 systemd-logind[1536]: New session 19 of user core. Jul 10 00:00:29.888980 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:00:31.487104 sshd[4336]: Connection closed by 139.178.89.65 port 46436 Jul 10 00:00:31.487658 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:31.496454 systemd[1]: sshd@17-139.178.70.109:22-139.178.89.65:46436.service: Deactivated successfully. Jul 10 00:00:31.498002 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:00:31.498500 systemd-logind[1536]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:00:31.504324 systemd[1]: Started sshd@18-139.178.70.109:22-139.178.89.65:46444.service - OpenSSH per-connection server daemon (139.178.89.65:46444). Jul 10 00:00:31.505843 systemd-logind[1536]: Removed session 19. Jul 10 00:00:31.620692 sshd[4352]: Accepted publickey for core from 139.178.89.65 port 46444 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:31.621832 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:31.625609 systemd-logind[1536]: New session 20 of user core. Jul 10 00:00:31.630014 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:00:31.873309 sshd[4355]: Connection closed by 139.178.89.65 port 46444 Jul 10 00:00:31.874666 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:31.881047 systemd[1]: sshd@18-139.178.70.109:22-139.178.89.65:46444.service: Deactivated successfully. Jul 10 00:00:31.882260 systemd[1]: session-20.scope: Deactivated successfully. 
Jul 10 00:00:31.883908 systemd-logind[1536]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:00:31.888142 systemd[1]: Started sshd@19-139.178.70.109:22-139.178.89.65:46452.service - OpenSSH per-connection server daemon (139.178.89.65:46452). Jul 10 00:00:31.889898 systemd-logind[1536]: Removed session 20. Jul 10 00:00:31.919975 sshd[4364]: Accepted publickey for core from 139.178.89.65 port 46452 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:31.920914 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:31.923891 systemd-logind[1536]: New session 21 of user core. Jul 10 00:00:31.928979 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:00:32.025119 sshd[4367]: Connection closed by 139.178.89.65 port 46452 Jul 10 00:00:32.025482 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:32.027589 systemd[1]: sshd@19-139.178.70.109:22-139.178.89.65:46452.service: Deactivated successfully. Jul 10 00:00:32.028744 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:00:32.029256 systemd-logind[1536]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:00:32.030232 systemd-logind[1536]: Removed session 21. Jul 10 00:00:37.035539 systemd[1]: Started sshd@20-139.178.70.109:22-139.178.89.65:46464.service - OpenSSH per-connection server daemon (139.178.89.65:46464). Jul 10 00:00:37.067445 sshd[4381]: Accepted publickey for core from 139.178.89.65 port 46464 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:37.068400 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:37.071261 systemd-logind[1536]: New session 22 of user core. Jul 10 00:00:37.082042 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 10 00:00:37.193900 sshd[4383]: Connection closed by 139.178.89.65 port 46464 Jul 10 00:00:37.194310 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:37.196776 systemd-logind[1536]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:00:37.197257 systemd[1]: sshd@20-139.178.70.109:22-139.178.89.65:46464.service: Deactivated successfully. Jul 10 00:00:37.198402 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:00:37.199302 systemd-logind[1536]: Removed session 22. Jul 10 00:00:42.208628 systemd[1]: Started sshd@21-139.178.70.109:22-139.178.89.65:42126.service - OpenSSH per-connection server daemon (139.178.89.65:42126). Jul 10 00:00:42.240442 sshd[4395]: Accepted publickey for core from 139.178.89.65 port 42126 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:42.241577 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:42.245673 systemd-logind[1536]: New session 23 of user core. Jul 10 00:00:42.255042 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:00:42.343050 sshd[4397]: Connection closed by 139.178.89.65 port 42126 Jul 10 00:00:42.343990 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:42.345573 systemd-logind[1536]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:00:42.345688 systemd[1]: sshd@21-139.178.70.109:22-139.178.89.65:42126.service: Deactivated successfully. Jul 10 00:00:42.346907 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:00:42.348152 systemd-logind[1536]: Removed session 23. Jul 10 00:00:47.353505 systemd[1]: Started sshd@22-139.178.70.109:22-139.178.89.65:42140.service - OpenSSH per-connection server daemon (139.178.89.65:42140). 
Jul 10 00:00:47.385868 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 42140 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:47.386871 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:47.391124 systemd-logind[1536]: New session 24 of user core. Jul 10 00:00:47.397128 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:00:47.484206 sshd[4412]: Connection closed by 139.178.89.65 port 42140 Jul 10 00:00:47.484565 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:47.486730 systemd[1]: sshd@22-139.178.70.109:22-139.178.89.65:42140.service: Deactivated successfully. Jul 10 00:00:47.488168 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:00:47.488780 systemd-logind[1536]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:00:47.489381 systemd-logind[1536]: Removed session 24. Jul 10 00:00:52.495549 systemd[1]: Started sshd@23-139.178.70.109:22-139.178.89.65:41366.service - OpenSSH per-connection server daemon (139.178.89.65:41366). Jul 10 00:00:52.528760 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 41366 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:52.529629 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:52.533625 systemd-logind[1536]: New session 25 of user core. Jul 10 00:00:52.541057 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:00:52.628656 sshd[4428]: Connection closed by 139.178.89.65 port 41366 Jul 10 00:00:52.629590 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:52.635332 systemd[1]: sshd@23-139.178.70.109:22-139.178.89.65:41366.service: Deactivated successfully. Jul 10 00:00:52.636591 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 10 00:00:52.637595 systemd-logind[1536]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:00:52.644115 systemd[1]: Started sshd@24-139.178.70.109:22-139.178.89.65:41378.service - OpenSSH per-connection server daemon (139.178.89.65:41378). Jul 10 00:00:52.645578 systemd-logind[1536]: Removed session 25. Jul 10 00:00:52.672968 sshd[4439]: Accepted publickey for core from 139.178.89.65 port 41378 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:52.673818 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:52.677491 systemd-logind[1536]: New session 26 of user core. Jul 10 00:00:52.678996 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:00:54.115653 containerd[1556]: time="2025-07-10T00:00:54.115627741Z" level=info msg="StopContainer for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" with timeout 30 (s)" Jul 10 00:00:54.117836 containerd[1556]: time="2025-07-10T00:00:54.117527822Z" level=info msg="Stop container \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" with signal terminated" Jul 10 00:00:54.128071 systemd[1]: run-containerd-runc-k8s.io-a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab-runc.BhllII.mount: Deactivated successfully. Jul 10 00:00:54.133826 systemd[1]: cri-containerd-ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e.scope: Deactivated successfully. Jul 10 00:00:54.134180 systemd[1]: cri-containerd-ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e.scope: Consumed 216ms CPU time, 31.4M memory peak, 5.4M read from disk, 4K written to disk. Jul 10 00:00:54.147687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e-rootfs.mount: Deactivated successfully. 
Jul 10 00:00:54.162147 containerd[1556]: time="2025-07-10T00:00:54.162016601Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:00:54.162857 containerd[1556]: time="2025-07-10T00:00:54.162495141Z" level=info msg="shim disconnected" id=ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e namespace=k8s.io Jul 10 00:00:54.162857 containerd[1556]: time="2025-07-10T00:00:54.162548860Z" level=warning msg="cleaning up after shim disconnected" id=ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e namespace=k8s.io Jul 10 00:00:54.162857 containerd[1556]: time="2025-07-10T00:00:54.162558936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:00:54.185321 containerd[1556]: time="2025-07-10T00:00:54.185299041Z" level=info msg="StopContainer for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" returns successfully" Jul 10 00:00:54.192787 containerd[1556]: time="2025-07-10T00:00:54.192611942Z" level=info msg="StopPodSandbox for \"ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53\"" Jul 10 00:00:54.192787 containerd[1556]: time="2025-07-10T00:00:54.192687282Z" level=info msg="StopContainer for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" with timeout 2 (s)" Jul 10 00:00:54.193281 containerd[1556]: time="2025-07-10T00:00:54.193146322Z" level=info msg="Stop container \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" with signal terminated" Jul 10 00:00:54.196419 containerd[1556]: time="2025-07-10T00:00:54.193479722Z" level=info msg="Container to stop \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:00:54.198017 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53-shm.mount: Deactivated successfully. Jul 10 00:00:54.199764 systemd-networkd[1466]: lxc_health: Link DOWN Jul 10 00:00:54.199769 systemd-networkd[1466]: lxc_health: Lost carrier Jul 10 00:00:54.207870 systemd[1]: cri-containerd-ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53.scope: Deactivated successfully. Jul 10 00:00:54.220913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53-rootfs.mount: Deactivated successfully. Jul 10 00:00:54.223125 systemd[1]: cri-containerd-a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab.scope: Deactivated successfully. Jul 10 00:00:54.223393 systemd[1]: cri-containerd-a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab.scope: Consumed 4.655s CPU time, 191.8M memory peak, 66.4M read from disk, 13.3M written to disk. Jul 10 00:00:54.225197 containerd[1556]: time="2025-07-10T00:00:54.225039268Z" level=info msg="shim disconnected" id=ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53 namespace=k8s.io Jul 10 00:00:54.225197 containerd[1556]: time="2025-07-10T00:00:54.225075160Z" level=warning msg="cleaning up after shim disconnected" id=ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53 namespace=k8s.io Jul 10 00:00:54.225197 containerd[1556]: time="2025-07-10T00:00:54.225080492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:00:54.235958 containerd[1556]: time="2025-07-10T00:00:54.235863116Z" level=info msg="TearDown network for sandbox \"ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53\" successfully" Jul 10 00:00:54.235958 containerd[1556]: time="2025-07-10T00:00:54.235884703Z" level=info msg="StopPodSandbox for \"ab523bac2c6c80bf3ae6f091945c53f170d7fe14817078092d187b64773c1e53\" returns successfully" Jul 10 00:00:54.248678 
containerd[1556]: time="2025-07-10T00:00:54.248627165Z" level=info msg="shim disconnected" id=a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab namespace=k8s.io Jul 10 00:00:54.248678 containerd[1556]: time="2025-07-10T00:00:54.248668800Z" level=warning msg="cleaning up after shim disconnected" id=a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab namespace=k8s.io Jul 10 00:00:54.248678 containerd[1556]: time="2025-07-10T00:00:54.248676382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:00:54.259335 containerd[1556]: time="2025-07-10T00:00:54.259222496Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:00:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 00:00:54.261183 containerd[1556]: time="2025-07-10T00:00:54.261137168Z" level=info msg="StopContainer for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" returns successfully" Jul 10 00:00:54.261571 containerd[1556]: time="2025-07-10T00:00:54.261465156Z" level=info msg="StopPodSandbox for \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\"" Jul 10 00:00:54.261571 containerd[1556]: time="2025-07-10T00:00:54.261483831Z" level=info msg="Container to stop \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:00:54.261571 containerd[1556]: time="2025-07-10T00:00:54.261504221Z" level=info msg="Container to stop \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:00:54.261571 containerd[1556]: time="2025-07-10T00:00:54.261509261Z" level=info msg="Container to stop \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" 
Jul 10 00:00:54.261571 containerd[1556]: time="2025-07-10T00:00:54.261513656Z" level=info msg="Container to stop \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:00:54.261571 containerd[1556]: time="2025-07-10T00:00:54.261517886Z" level=info msg="Container to stop \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:00:54.268550 systemd[1]: cri-containerd-cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3.scope: Deactivated successfully. Jul 10 00:00:54.293763 containerd[1556]: time="2025-07-10T00:00:54.293647678Z" level=info msg="shim disconnected" id=cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3 namespace=k8s.io Jul 10 00:00:54.293763 containerd[1556]: time="2025-07-10T00:00:54.293679388Z" level=warning msg="cleaning up after shim disconnected" id=cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3 namespace=k8s.io Jul 10 00:00:54.293763 containerd[1556]: time="2025-07-10T00:00:54.293684690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:00:54.308382 containerd[1556]: time="2025-07-10T00:00:54.307949574Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:00:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 00:00:54.308721 containerd[1556]: time="2025-07-10T00:00:54.308704628Z" level=info msg="TearDown network for sandbox \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" successfully" Jul 10 00:00:54.308721 containerd[1556]: time="2025-07-10T00:00:54.308718835Z" level=info msg="StopPodSandbox for \"cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3\" returns successfully" Jul 10 00:00:54.398263 kubelet[2813]: I0710 00:00:54.392428 
2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25058434-89bd-4f24-90d8-a71feb5bd4d7-cilium-config-path\") pod \"25058434-89bd-4f24-90d8-a71feb5bd4d7\" (UID: \"25058434-89bd-4f24-90d8-a71feb5bd4d7\") " Jul 10 00:00:54.400872 kubelet[2813]: I0710 00:00:54.400826 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-etc-cni-netd\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.400872 kubelet[2813]: I0710 00:00:54.400866 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cni-path\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.400952 kubelet[2813]: I0710 00:00:54.400892 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-xtables-lock\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.400952 kubelet[2813]: I0710 00:00:54.400909 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-lib-modules\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.400952 kubelet[2813]: I0710 00:00:54.400924 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-run\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: 
\"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.400952 kubelet[2813]: I0710 00:00:54.400939 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-cgroup\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401119 kubelet[2813]: I0710 00:00:54.400956 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-kernel\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401119 kubelet[2813]: I0710 00:00:54.400974 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45l2\" (UniqueName: \"kubernetes.io/projected/25058434-89bd-4f24-90d8-a71feb5bd4d7-kube-api-access-s45l2\") pod \"25058434-89bd-4f24-90d8-a71feb5bd4d7\" (UID: \"25058434-89bd-4f24-90d8-a71feb5bd4d7\") " Jul 10 00:00:54.401119 kubelet[2813]: I0710 00:00:54.400990 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hubble-tls\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401119 kubelet[2813]: I0710 00:00:54.401007 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndnrm\" (UniqueName: \"kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-kube-api-access-ndnrm\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401119 kubelet[2813]: I0710 00:00:54.401019 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-net\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401119 kubelet[2813]: I0710 00:00:54.401036 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2c51f7e-468f-481e-ba96-38fbb2b938fe-clustermesh-secrets\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401251 kubelet[2813]: I0710 00:00:54.401053 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-config-path\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.401251 kubelet[2813]: I0710 00:00:54.401070 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-bpf-maps\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.407859 kubelet[2813]: I0710 00:00:54.406470 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407859 kubelet[2813]: I0710 00:00:54.407731 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407859 kubelet[2813]: I0710 00:00:54.407752 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cni-path" (OuterVolumeSpecName: "cni-path") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407859 kubelet[2813]: I0710 00:00:54.407765 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407859 kubelet[2813]: I0710 00:00:54.407780 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407982 kubelet[2813]: I0710 00:00:54.407791 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407982 kubelet[2813]: I0710 00:00:54.407805 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.407982 kubelet[2813]: I0710 00:00:54.407816 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.408724 kubelet[2813]: I0710 00:00:54.408712 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25058434-89bd-4f24-90d8-a71feb5bd4d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25058434-89bd-4f24-90d8-a71feb5bd4d7" (UID: "25058434-89bd-4f24-90d8-a71feb5bd4d7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:00:54.408862 kubelet[2813]: I0710 00:00:54.408785 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.416316 kubelet[2813]: I0710 00:00:54.416294 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-kube-api-access-ndnrm" (OuterVolumeSpecName: "kube-api-access-ndnrm") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "kube-api-access-ndnrm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:00:54.416446 kubelet[2813]: I0710 00:00:54.416391 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:00:54.416446 kubelet[2813]: I0710 00:00:54.416398 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25058434-89bd-4f24-90d8-a71feb5bd4d7-kube-api-access-s45l2" (OuterVolumeSpecName: "kube-api-access-s45l2") pod "25058434-89bd-4f24-90d8-a71feb5bd4d7" (UID: "25058434-89bd-4f24-90d8-a71feb5bd4d7"). InnerVolumeSpecName "kube-api-access-s45l2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:00:54.417371 kubelet[2813]: I0710 00:00:54.417354 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:00:54.418257 kubelet[2813]: I0710 00:00:54.418236 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c51f7e-468f-481e-ba96-38fbb2b938fe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:00:54.501722 kubelet[2813]: I0710 00:00:54.501690 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hostproc\") pod \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\" (UID: \"b2c51f7e-468f-481e-ba96-38fbb2b938fe\") " Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501748 2813 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501759 2813 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501767 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501773 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501779 2813 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501783 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ndnrm\" (UniqueName: \"kubernetes.io/projected/b2c51f7e-468f-481e-ba96-38fbb2b938fe-kube-api-access-ndnrm\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501789 2813 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.501835 kubelet[2813]: I0710 00:00:54.501794 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s45l2\" (UniqueName: \"kubernetes.io/projected/25058434-89bd-4f24-90d8-a71feb5bd4d7-kube-api-access-s45l2\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501799 2813 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501803 2813 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2c51f7e-468f-481e-ba96-38fbb2b938fe-clustermesh-secrets\") on 
node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501808 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501812 2813 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501817 2813 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501821 2813 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502030 kubelet[2813]: I0710 00:00:54.501826 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25058434-89bd-4f24-90d8-a71feb5bd4d7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.502458 kubelet[2813]: I0710 00:00:54.502444 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hostproc" (OuterVolumeSpecName: "hostproc") pod "b2c51f7e-468f-481e-ba96-38fbb2b938fe" (UID: "b2c51f7e-468f-481e-ba96-38fbb2b938fe"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:00:54.602450 kubelet[2813]: I0710 00:00:54.602430 2813 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2c51f7e-468f-481e-ba96-38fbb2b938fe-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:00:54.690610 systemd[1]: Removed slice kubepods-besteffort-pod25058434_89bd_4f24_90d8_a71feb5bd4d7.slice - libcontainer container kubepods-besteffort-pod25058434_89bd_4f24_90d8_a71feb5bd4d7.slice. Jul 10 00:00:54.692030 kubelet[2813]: I0710 00:00:54.690694 2813 scope.go:117] "RemoveContainer" containerID="ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e" Jul 10 00:00:54.690674 systemd[1]: kubepods-besteffort-pod25058434_89bd_4f24_90d8_a71feb5bd4d7.slice: Consumed 238ms CPU time, 32.1M memory peak, 5.4M read from disk, 4K written to disk. Jul 10 00:00:54.696706 containerd[1556]: time="2025-07-10T00:00:54.696673057Z" level=info msg="RemoveContainer for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\"" Jul 10 00:00:54.698077 containerd[1556]: time="2025-07-10T00:00:54.698045873Z" level=info msg="RemoveContainer for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" returns successfully" Jul 10 00:00:54.702309 kubelet[2813]: I0710 00:00:54.702296 2813 scope.go:117] "RemoveContainer" containerID="ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e" Jul 10 00:00:54.703677 systemd[1]: Removed slice kubepods-burstable-podb2c51f7e_468f_481e_ba96_38fbb2b938fe.slice - libcontainer container kubepods-burstable-podb2c51f7e_468f_481e_ba96_38fbb2b938fe.slice. Jul 10 00:00:54.703757 systemd[1]: kubepods-burstable-podb2c51f7e_468f_481e_ba96_38fbb2b938fe.slice: Consumed 4.707s CPU time, 192.7M memory peak, 66.4M read from disk, 13.3M written to disk. 
Jul 10 00:00:54.704575 containerd[1556]: time="2025-07-10T00:00:54.704540740Z" level=error msg="ContainerStatus for \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\": not found" Jul 10 00:00:54.722079 kubelet[2813]: E0710 00:00:54.722018 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\": not found" containerID="ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e" Jul 10 00:00:54.723150 kubelet[2813]: I0710 00:00:54.723007 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e"} err="failed to get container status \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad256d1b547f8e8b91a5bb6af105ea324dc8605538af176b52bbc1d03928727e\": not found" Jul 10 00:00:54.723150 kubelet[2813]: I0710 00:00:54.723084 2813 scope.go:117] "RemoveContainer" containerID="a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab" Jul 10 00:00:54.723769 containerd[1556]: time="2025-07-10T00:00:54.723751598Z" level=info msg="RemoveContainer for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\"" Jul 10 00:00:54.725134 containerd[1556]: time="2025-07-10T00:00:54.725121768Z" level=info msg="RemoveContainer for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" returns successfully" Jul 10 00:00:54.725264 kubelet[2813]: I0710 00:00:54.725250 2813 scope.go:117] "RemoveContainer" containerID="001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e" Jul 10 00:00:54.725763 
containerd[1556]: time="2025-07-10T00:00:54.725752668Z" level=info msg="RemoveContainer for \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\"" Jul 10 00:00:54.727045 containerd[1556]: time="2025-07-10T00:00:54.727013456Z" level=info msg="RemoveContainer for \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\" returns successfully" Jul 10 00:00:54.727143 kubelet[2813]: I0710 00:00:54.727096 2813 scope.go:117] "RemoveContainer" containerID="cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35" Jul 10 00:00:54.727804 containerd[1556]: time="2025-07-10T00:00:54.727613188Z" level=info msg="RemoveContainer for \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\"" Jul 10 00:00:54.728586 containerd[1556]: time="2025-07-10T00:00:54.728574449Z" level=info msg="RemoveContainer for \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\" returns successfully" Jul 10 00:00:54.728718 kubelet[2813]: I0710 00:00:54.728705 2813 scope.go:117] "RemoveContainer" containerID="22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d" Jul 10 00:00:54.729349 containerd[1556]: time="2025-07-10T00:00:54.729136324Z" level=info msg="RemoveContainer for \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\"" Jul 10 00:00:54.730158 containerd[1556]: time="2025-07-10T00:00:54.730146776Z" level=info msg="RemoveContainer for \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\" returns successfully" Jul 10 00:00:54.730295 kubelet[2813]: I0710 00:00:54.730281 2813 scope.go:117] "RemoveContainer" containerID="be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e" Jul 10 00:00:54.730943 containerd[1556]: time="2025-07-10T00:00:54.730890789Z" level=info msg="RemoveContainer for \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\"" Jul 10 00:00:54.732142 containerd[1556]: time="2025-07-10T00:00:54.732059214Z" level=info msg="RemoveContainer for 
\"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\" returns successfully" Jul 10 00:00:54.732233 kubelet[2813]: I0710 00:00:54.732150 2813 scope.go:117] "RemoveContainer" containerID="a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab" Jul 10 00:00:54.732317 containerd[1556]: time="2025-07-10T00:00:54.732295847Z" level=error msg="ContainerStatus for \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\": not found" Jul 10 00:00:54.732378 kubelet[2813]: E0710 00:00:54.732364 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\": not found" containerID="a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab" Jul 10 00:00:54.732400 kubelet[2813]: I0710 00:00:54.732379 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab"} err="failed to get container status \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab\": not found" Jul 10 00:00:54.732400 kubelet[2813]: I0710 00:00:54.732392 2813 scope.go:117] "RemoveContainer" containerID="001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e" Jul 10 00:00:54.732559 kubelet[2813]: E0710 00:00:54.732543 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\": not found" 
containerID="001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e" Jul 10 00:00:54.732559 kubelet[2813]: I0710 00:00:54.732553 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e"} err="failed to get container status \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\": rpc error: code = NotFound desc = an error occurred when try to find container \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\": not found" Jul 10 00:00:54.732605 containerd[1556]: time="2025-07-10T00:00:54.732481393Z" level=error msg="ContainerStatus for \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"001644e92655133a45fd0b438b43d4374594ae0a64af3416186c3757f4a8315e\": not found" Jul 10 00:00:54.732625 kubelet[2813]: I0710 00:00:54.732561 2813 scope.go:117] "RemoveContainer" containerID="cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35" Jul 10 00:00:54.732759 containerd[1556]: time="2025-07-10T00:00:54.732729447Z" level=error msg="ContainerStatus for \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\": not found" Jul 10 00:00:54.732808 kubelet[2813]: E0710 00:00:54.732785 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\": not found" containerID="cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35" Jul 10 00:00:54.732808 kubelet[2813]: I0710 00:00:54.732795 2813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35"} err="failed to get container status \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb30b8de1aab4c52a10aebafcd55c63d2ff38c28cc66696319c8940ffd583e35\": not found" Jul 10 00:00:54.732808 kubelet[2813]: I0710 00:00:54.732802 2813 scope.go:117] "RemoveContainer" containerID="22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d" Jul 10 00:00:54.733151 kubelet[2813]: E0710 00:00:54.732954 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\": not found" containerID="22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d" Jul 10 00:00:54.733151 kubelet[2813]: I0710 00:00:54.732967 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d"} err="failed to get container status \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\": rpc error: code = NotFound desc = an error occurred when try to find container \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\": not found" Jul 10 00:00:54.733151 kubelet[2813]: I0710 00:00:54.732978 2813 scope.go:117] "RemoveContainer" containerID="be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e" Jul 10 00:00:54.733282 containerd[1556]: time="2025-07-10T00:00:54.732891321Z" level=error msg="ContainerStatus for \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22a4eaec1b2708e25f13b7b436f97596e086b48baf26899351adc4b7159b739d\": not found" Jul 10 00:00:54.733282 containerd[1556]: 
time="2025-07-10T00:00:54.733065650Z" level=error msg="ContainerStatus for \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\": not found" Jul 10 00:00:54.733324 kubelet[2813]: E0710 00:00:54.733205 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\": not found" containerID="be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e" Jul 10 00:00:54.733324 kubelet[2813]: I0710 00:00:54.733215 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e"} err="failed to get container status \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\": rpc error: code = NotFound desc = an error occurred when try to find container \"be1f5f726268707737ecd661c4a46788ac790cf25ed0a607351ed9c6d86cd50e\": not found" Jul 10 00:00:55.121143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a29e522340df7a382cff3cec659278ecb513256607da5c03e86fa8dcca5ce1ab-rootfs.mount: Deactivated successfully. Jul 10 00:00:55.121238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3-rootfs.mount: Deactivated successfully. Jul 10 00:00:55.121301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb0d3f18ceabcde6acc4bcc397352f68d7eaf297ec57fdd16ba6a918fc57e2c3-shm.mount: Deactivated successfully. Jul 10 00:00:55.121350 systemd[1]: var-lib-kubelet-pods-b2c51f7e\x2d468f\x2d481e\x2dba96\x2d38fbb2b938fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dndnrm.mount: Deactivated successfully. 
Jul 10 00:00:55.121398 systemd[1]: var-lib-kubelet-pods-25058434\x2d89bd\x2d4f24\x2d90d8\x2da71feb5bd4d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds45l2.mount: Deactivated successfully. Jul 10 00:00:55.121449 systemd[1]: var-lib-kubelet-pods-b2c51f7e\x2d468f\x2d481e\x2dba96\x2d38fbb2b938fe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:00:55.121501 systemd[1]: var-lib-kubelet-pods-b2c51f7e\x2d468f\x2d481e\x2dba96\x2d38fbb2b938fe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:00:56.004304 kubelet[2813]: I0710 00:00:56.004275 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25058434-89bd-4f24-90d8-a71feb5bd4d7" path="/var/lib/kubelet/pods/25058434-89bd-4f24-90d8-a71feb5bd4d7/volumes" Jul 10 00:00:56.009389 kubelet[2813]: I0710 00:00:56.009364 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c51f7e-468f-481e-ba96-38fbb2b938fe" path="/var/lib/kubelet/pods/b2c51f7e-468f-481e-ba96-38fbb2b938fe/volumes" Jul 10 00:00:56.082443 sshd[4442]: Connection closed by 139.178.89.65 port 41378 Jul 10 00:00:56.084376 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:56.089633 systemd[1]: sshd@24-139.178.70.109:22-139.178.89.65:41378.service: Deactivated successfully. Jul 10 00:00:56.091609 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:00:56.092243 systemd-logind[1536]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:00:56.100457 systemd[1]: Started sshd@25-139.178.70.109:22-139.178.89.65:41380.service - OpenSSH per-connection server daemon (139.178.89.65:41380). Jul 10 00:00:56.101336 systemd-logind[1536]: Removed session 26. 
Jul 10 00:00:56.245162 sshd[4602]: Accepted publickey for core from 139.178.89.65 port 41380 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:56.246121 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:56.253640 systemd-logind[1536]: New session 27 of user core. Jul 10 00:00:56.259948 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 00:00:56.842403 sshd[4605]: Connection closed by 139.178.89.65 port 41380 Jul 10 00:00:56.842925 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:56.849197 systemd[1]: sshd@25-139.178.70.109:22-139.178.89.65:41380.service: Deactivated successfully. Jul 10 00:00:56.852787 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:00:56.856312 systemd-logind[1536]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:00:56.863137 systemd[1]: Started sshd@26-139.178.70.109:22-139.178.89.65:41388.service - OpenSSH per-connection server daemon (139.178.89.65:41388). Jul 10 00:00:56.865698 systemd-logind[1536]: Removed session 27. Jul 10 00:00:56.881719 kubelet[2813]: I0710 00:00:56.881694 2813 memory_manager.go:355] "RemoveStaleState removing state" podUID="25058434-89bd-4f24-90d8-a71feb5bd4d7" containerName="cilium-operator" Jul 10 00:00:56.881822 kubelet[2813]: I0710 00:00:56.881815 2813 memory_manager.go:355] "RemoveStaleState removing state" podUID="b2c51f7e-468f-481e-ba96-38fbb2b938fe" containerName="cilium-agent" Jul 10 00:00:56.905734 systemd[1]: Created slice kubepods-burstable-pod75bd1f7e_5ee8_4f40_9fa1_c0904b808503.slice - libcontainer container kubepods-burstable-pod75bd1f7e_5ee8_4f40_9fa1_c0904b808503.slice. 
Jul 10 00:00:56.918695 kubelet[2813]: I0710 00:00:56.918658 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-xtables-lock\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918695 kubelet[2813]: I0710 00:00:56.918688 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-host-proc-sys-net\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918695 kubelet[2813]: I0710 00:00:56.918704 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-etc-cni-netd\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918863 kubelet[2813]: I0710 00:00:56.918714 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-cilium-cgroup\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918863 kubelet[2813]: I0710 00:00:56.918723 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-cilium-config-path\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918863 kubelet[2813]: I0710 00:00:56.918732 2813 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-cilium-ipsec-secrets\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918863 kubelet[2813]: I0710 00:00:56.918743 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-cilium-run\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918863 kubelet[2813]: I0710 00:00:56.918753 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-clustermesh-secrets\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918863 kubelet[2813]: I0710 00:00:56.918766 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-hubble-tls\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918986 kubelet[2813]: I0710 00:00:56.918776 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm67r\" (UniqueName: \"kubernetes.io/projected/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-kube-api-access-mm67r\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918986 kubelet[2813]: I0710 00:00:56.918786 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-bpf-maps\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918986 kubelet[2813]: I0710 00:00:56.918797 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-hostproc\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918986 kubelet[2813]: I0710 00:00:56.918807 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-lib-modules\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918986 kubelet[2813]: I0710 00:00:56.918818 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-host-proc-sys-kernel\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.918986 kubelet[2813]: I0710 00:00:56.918828 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75bd1f7e-5ee8-4f40-9fa1-c0904b808503-cni-path\") pod \"cilium-68mxh\" (UID: \"75bd1f7e-5ee8-4f40-9fa1-c0904b808503\") " pod="kube-system/cilium-68mxh" Jul 10 00:00:56.919433 sshd[4615]: Accepted publickey for core from 139.178.89.65 port 41388 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:56.923089 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:56.926461 systemd-logind[1536]: New session 28 of 
user core. Jul 10 00:00:56.935095 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 10 00:00:56.985940 sshd[4618]: Connection closed by 139.178.89.65 port 41388 Jul 10 00:00:56.985280 sshd-session[4615]: pam_unix(sshd:session): session closed for user core Jul 10 00:00:56.995134 systemd[1]: sshd@26-139.178.70.109:22-139.178.89.65:41388.service: Deactivated successfully. Jul 10 00:00:56.996490 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 00:00:56.997190 systemd-logind[1536]: Session 28 logged out. Waiting for processes to exit. Jul 10 00:00:57.003290 systemd[1]: Started sshd@27-139.178.70.109:22-139.178.89.65:41390.service - OpenSSH per-connection server daemon (139.178.89.65:41390). Jul 10 00:00:57.004755 systemd-logind[1536]: Removed session 28. Jul 10 00:00:57.031968 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 41390 ssh2: RSA SHA256:iW7N8ouL0MNZquiamslGIbLBE90/GD9BMvBNQ+/8OB0 Jul 10 00:00:57.032874 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:00:57.036939 systemd-logind[1536]: New session 29 of user core. Jul 10 00:00:57.041669 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 10 00:00:57.229698 containerd[1556]: time="2025-07-10T00:00:57.229629033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-68mxh,Uid:75bd1f7e-5ee8-4f40-9fa1-c0904b808503,Namespace:kube-system,Attempt:0,}" Jul 10 00:00:57.250466 kubelet[2813]: E0710 00:00:57.250441 2813 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:00:57.273964 containerd[1556]: time="2025-07-10T00:00:57.268680395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:00:57.273964 containerd[1556]: time="2025-07-10T00:00:57.268743478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:00:57.273964 containerd[1556]: time="2025-07-10T00:00:57.268756743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:00:57.273964 containerd[1556]: time="2025-07-10T00:00:57.268808891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:00:57.285944 systemd[1]: Started cri-containerd-8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c.scope - libcontainer container 8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c. Jul 10 00:00:57.306260 containerd[1556]: time="2025-07-10T00:00:57.306085432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-68mxh,Uid:75bd1f7e-5ee8-4f40-9fa1-c0904b808503,Namespace:kube-system,Attempt:0,} returns sandbox id \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\"" Jul 10 00:00:57.308342 containerd[1556]: time="2025-07-10T00:00:57.308324811Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:00:57.320408 containerd[1556]: time="2025-07-10T00:00:57.320384787Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea\"" Jul 10 00:00:57.320838 containerd[1556]: time="2025-07-10T00:00:57.320823071Z" level=info msg="StartContainer for \"0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea\"" 
Jul 10 00:00:57.341003 systemd[1]: Started cri-containerd-0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea.scope - libcontainer container 0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea.
Jul 10 00:00:57.360838 containerd[1556]: time="2025-07-10T00:00:57.360584809Z" level=info msg="StartContainer for \"0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea\" returns successfully"
Jul 10 00:00:57.374181 systemd[1]: cri-containerd-0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea.scope: Deactivated successfully.
Jul 10 00:00:57.374556 systemd[1]: cri-containerd-0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea.scope: Consumed 16ms CPU time, 9.2M memory peak, 2.6M read from disk.
Jul 10 00:00:57.430122 containerd[1556]: time="2025-07-10T00:00:57.430039589Z" level=info msg="shim disconnected" id=0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea namespace=k8s.io
Jul 10 00:00:57.430122 containerd[1556]: time="2025-07-10T00:00:57.430074382Z" level=warning msg="cleaning up after shim disconnected" id=0f40ef867e92eb7ffee897bf0a44d2f47a2bde747575258b30185090a11c7cea namespace=k8s.io
Jul 10 00:00:57.430122 containerd[1556]: time="2025-07-10T00:00:57.430079839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:00:57.702772 containerd[1556]: time="2025-07-10T00:00:57.702739490Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:00:57.739744 containerd[1556]: time="2025-07-10T00:00:57.739390532Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d\""
Jul 10 00:00:57.740625 containerd[1556]: time="2025-07-10T00:00:57.740087982Z" level=info msg="StartContainer for \"f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d\""
Jul 10 00:00:57.766037 systemd[1]: Started cri-containerd-f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d.scope - libcontainer container f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d.
Jul 10 00:00:57.781087 containerd[1556]: time="2025-07-10T00:00:57.781056766Z" level=info msg="StartContainer for \"f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d\" returns successfully"
Jul 10 00:00:57.794351 systemd[1]: cri-containerd-f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d.scope: Deactivated successfully.
Jul 10 00:00:57.794548 systemd[1]: cri-containerd-f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d.scope: Consumed 12ms CPU time, 7.3M memory peak, 2M read from disk.
Jul 10 00:00:57.808496 containerd[1556]: time="2025-07-10T00:00:57.808443663Z" level=info msg="shim disconnected" id=f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d namespace=k8s.io
Jul 10 00:00:57.808496 containerd[1556]: time="2025-07-10T00:00:57.808488930Z" level=warning msg="cleaning up after shim disconnected" id=f30135f042e8b221d746669f87ee4742f0c247f4f9fcaec3ce958c313477a69d namespace=k8s.io
Jul 10 00:00:57.808496 containerd[1556]: time="2025-07-10T00:00:57.808497197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:00:58.705377 containerd[1556]: time="2025-07-10T00:00:58.705334395Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:00:58.719040 containerd[1556]: time="2025-07-10T00:00:58.719012975Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791\""
Jul 10 00:00:58.719579 containerd[1556]: time="2025-07-10T00:00:58.719566734Z" level=info msg="StartContainer for \"4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791\""
Jul 10 00:00:58.762125 systemd[1]: Started cri-containerd-4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791.scope - libcontainer container 4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791.
Jul 10 00:00:58.788026 containerd[1556]: time="2025-07-10T00:00:58.787991190Z" level=info msg="StartContainer for \"4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791\" returns successfully"
Jul 10 00:00:58.827731 systemd[1]: cri-containerd-4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791.scope: Deactivated successfully.
Jul 10 00:00:58.848428 containerd[1556]: time="2025-07-10T00:00:58.848258852Z" level=info msg="shim disconnected" id=4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791 namespace=k8s.io
Jul 10 00:00:58.848428 containerd[1556]: time="2025-07-10T00:00:58.848302105Z" level=warning msg="cleaning up after shim disconnected" id=4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791 namespace=k8s.io
Jul 10 00:00:58.848428 containerd[1556]: time="2025-07-10T00:00:58.848310656Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:00:59.043727 systemd[1]: run-containerd-runc-k8s.io-4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791-runc.LmCHzu.mount: Deactivated successfully.
Jul 10 00:00:59.043820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ee6c7229746275e578fff3cc0e259b27a0fe22b3836d8c681f848f54774e791-rootfs.mount: Deactivated successfully.
Jul 10 00:00:59.708052 containerd[1556]: time="2025-07-10T00:00:59.707945930Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:00:59.724148 containerd[1556]: time="2025-07-10T00:00:59.724077152Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57\""
Jul 10 00:00:59.725121 containerd[1556]: time="2025-07-10T00:00:59.725023526Z" level=info msg="StartContainer for \"b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57\""
Jul 10 00:00:59.751002 systemd[1]: Started cri-containerd-b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57.scope - libcontainer container b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57.
Jul 10 00:00:59.766632 containerd[1556]: time="2025-07-10T00:00:59.766038336Z" level=info msg="StartContainer for \"b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57\" returns successfully"
Jul 10 00:00:59.766145 systemd[1]: cri-containerd-b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57.scope: Deactivated successfully.
Jul 10 00:00:59.783313 containerd[1556]: time="2025-07-10T00:00:59.783253519Z" level=info msg="shim disconnected" id=b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57 namespace=k8s.io
Jul 10 00:00:59.783313 containerd[1556]: time="2025-07-10T00:00:59.783308676Z" level=warning msg="cleaning up after shim disconnected" id=b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57 namespace=k8s.io
Jul 10 00:00:59.783313 containerd[1556]: time="2025-07-10T00:00:59.783315046Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:01:00.043238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b997452f0a14d39d7958a4ec7f3e697daafb2b3597d8c560ef83054b279d9c57-rootfs.mount: Deactivated successfully.
Jul 10 00:01:00.711290 containerd[1556]: time="2025-07-10T00:01:00.711259216Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:01:00.769952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625821709.mount: Deactivated successfully.
Jul 10 00:01:00.798693 containerd[1556]: time="2025-07-10T00:01:00.798605168Z" level=info msg="CreateContainer within sandbox \"8abe56daa285e01aa5308c6019ab1329f23628c21c575d6917ccc1decaa84f6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b\""
Jul 10 00:01:00.799214 containerd[1556]: time="2025-07-10T00:01:00.799180790Z" level=info msg="StartContainer for \"e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b\""
Jul 10 00:01:00.826970 systemd[1]: Started cri-containerd-e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b.scope - libcontainer container e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b.
Jul 10 00:01:00.858340 containerd[1556]: time="2025-07-10T00:01:00.858273670Z" level=info msg="StartContainer for \"e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b\" returns successfully"
Jul 10 00:01:01.574913 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 10 00:01:01.733001 kubelet[2813]: I0710 00:01:01.732642 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-68mxh" podStartSLOduration=5.732630071 podStartE2EDuration="5.732630071s" podCreationTimestamp="2025-07-10 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:01:01.732516212 +0000 UTC m=+130.036796923" watchObservedRunningTime="2025-07-10 00:01:01.732630071 +0000 UTC m=+130.036910778"
Jul 10 00:01:03.372274 systemd[1]: run-containerd-runc-k8s.io-e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b-runc.4L98lt.mount: Deactivated successfully.
Jul 10 00:01:04.206376 systemd-networkd[1466]: lxc_health: Link UP
Jul 10 00:01:04.206545 systemd-networkd[1466]: lxc_health: Gained carrier
Jul 10 00:01:06.183033 systemd-networkd[1466]: lxc_health: Gained IPv6LL
Jul 10 00:01:11.869189 systemd[1]: run-containerd-runc-k8s.io-e2ee33f04986b0c1644cad90ff8f41b22d998cac5e52f5bbfe8a57a10094577b-runc.rjrAHP.mount: Deactivated successfully.
Jul 10 00:01:11.914327 sshd[4631]: Connection closed by 139.178.89.65 port 41390
Jul 10 00:01:11.916806 sshd-session[4624]: pam_unix(sshd:session): session closed for user core
Jul 10 00:01:11.930287 systemd[1]: sshd@27-139.178.70.109:22-139.178.89.65:41390.service: Deactivated successfully.
Jul 10 00:01:11.931939 systemd[1]: session-29.scope: Deactivated successfully.
Jul 10 00:01:11.932968 systemd-logind[1536]: Session 29 logged out. Waiting for processes to exit.
Jul 10 00:01:11.933748 systemd-logind[1536]: Removed session 29.