Jul 14 22:37:40.699757 kernel: Linux version 6.12.37-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 19:48:52 -00 2025 Jul 14 22:37:40.699774 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f410b284aa01ceabdc6514a023dd712766dc6ad9b98f3a3d0190d1808e2e1320 Jul 14 22:37:40.699780 kernel: Disabled fast string operations Jul 14 22:37:40.699784 kernel: BIOS-provided physical RAM map: Jul 14 22:37:40.699788 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 14 22:37:40.699792 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 14 22:37:40.699798 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 14 22:37:40.699802 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 14 22:37:40.699807 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 14 22:37:40.699811 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 14 22:37:40.699815 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 14 22:37:40.699819 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 14 22:37:40.699823 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 14 22:37:40.699828 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 14 22:37:40.699834 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 14 22:37:40.699839 kernel: NX (Execute Disable) protection: active Jul 14 22:37:40.699844 kernel: APIC: Static calls initialized Jul 14 22:37:40.699848 kernel: SMBIOS 2.7 present. Jul 14 22:37:40.699853 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 14 22:37:40.699858 kernel: DMI: Memory slots populated: 1/128 Jul 14 22:37:40.699864 kernel: vmware: hypercall mode: 0x00 Jul 14 22:37:40.699869 kernel: Hypervisor detected: VMware Jul 14 22:37:40.699873 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 14 22:37:40.699878 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 14 22:37:40.699883 kernel: vmware: using clock offset of 3652415108 ns Jul 14 22:37:40.699887 kernel: tsc: Detected 3408.000 MHz processor Jul 14 22:37:40.699893 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 14 22:37:40.699898 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 14 22:37:40.699903 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 14 22:37:40.699907 kernel: total RAM covered: 3072M Jul 14 22:37:40.699914 kernel: Found optimal setting for mtrr clean up Jul 14 22:37:40.699919 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 14 22:37:40.699924 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 14 22:37:40.699929 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 14 22:37:40.699934 kernel: Using GB pages for direct mapping Jul 14 22:37:40.699939 kernel: ACPI: Early table checksum verification disabled Jul 14 22:37:40.699944 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 14 22:37:40.699948 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 14 22:37:40.699953 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 14 22:37:40.699959 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 14 22:37:40.699966 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 14 22:37:40.699971 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 14 22:37:40.699976 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 14 22:37:40.699981 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jul 14 22:37:40.699987 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 14 22:37:40.699993 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 14 22:37:40.699998 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 14 22:37:40.700003 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 14 22:37:40.700008 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 14 22:37:40.700013 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 14 22:37:40.700018 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 14 22:37:40.700023 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 14 22:37:40.700028 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 14 22:37:40.700033 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 14 22:37:40.700039 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 14 22:37:40.700044 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 14 22:37:40.700049 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 14 22:37:40.700054 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 14 22:37:40.700059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 14 22:37:40.700064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 14 22:37:40.700070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 14 22:37:40.700075 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Jul 14 22:37:40.700080 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Jul 14 22:37:40.700086 kernel: Zone ranges: Jul 14 22:37:40.700091 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 14 22:37:40.700096 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 14 22:37:40.700101 kernel: Normal empty Jul 14 22:37:40.700106 kernel: Device empty Jul 14 22:37:40.700111 kernel: Movable zone start for each node Jul 14 22:37:40.700116 kernel: Early memory node ranges Jul 14 22:37:40.700121 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 14 22:37:40.700126 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 14 22:37:40.700132 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 14 22:37:40.700137 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 14 22:37:40.700142 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 14 22:37:40.700148 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 14 22:37:40.700153 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 14 22:37:40.700160 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 14 22:37:40.700166 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 14 22:37:40.700171 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 14 22:37:40.700176 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 14 22:37:40.700181 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 14 22:37:40.700187 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 14 22:37:40.700192 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 14 22:37:40.700197 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Jul 14 22:37:40.700202 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 14 22:37:40.700207 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 14 22:37:40.700212 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 14 22:37:40.700217 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 14 22:37:40.700221 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 14 22:37:40.700226 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 14 22:37:40.700232 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 14 22:37:40.700237 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 14 22:37:40.700242 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 14 22:37:40.700247 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 14 22:37:40.700252 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 14 22:37:40.700257 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 14 22:37:40.700262 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 14 22:37:40.700267 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 14 22:37:40.700272 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 14 22:37:40.700278 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 14 22:37:40.700283 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 14 22:37:40.700288 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 14 22:37:40.700293 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 14 22:37:40.700298 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 14 22:37:40.700303 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 14 22:37:40.700308 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 14 22:37:40.700313 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 14 22:37:40.700317 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 14 22:37:40.700323 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 14 22:37:40.700329 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 14 22:37:40.700333 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 14 22:37:40.700338 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 14 22:37:40.700343 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 14 22:37:40.700348 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 14 22:37:40.700353 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 14 22:37:40.700358 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 14 22:37:40.700363 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 14 22:37:40.700373 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 14 22:37:40.700378 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 14 22:37:40.700384 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 14 22:37:40.700389 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 14 22:37:40.700395 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 14 22:37:40.700400 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 14 22:37:40.700406 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 14 22:37:40.700411 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 14 22:37:40.700416 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 14 22:37:40.700422 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Jul 14 22:37:40.700428 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 14 22:37:40.700433 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 14 22:37:40.700438 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 14 22:37:40.700444 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 14 22:37:40.700449 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 14 22:37:40.700454 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 14 22:37:40.700459 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 14 22:37:40.700465 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 14 22:37:40.700471 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 14 22:37:40.700476 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 14 22:37:40.700482 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 14 22:37:40.700493 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 14 22:37:40.700498 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 14 22:37:40.700503 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 14 22:37:40.700509 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 14 22:37:40.700514 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 14 22:37:40.700519 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 14 22:37:40.700524 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 14 22:37:40.700531 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 14 22:37:40.700536 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 14 22:37:40.700541 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 14 22:37:40.700547 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 14 22:37:40.700552 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 14 22:37:40.700557 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 14 22:37:40.700562 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 14 22:37:40.700568 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 14 22:37:40.700573 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 14 22:37:40.700579 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 14 22:37:40.700594 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 14 22:37:40.700601 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 14 22:37:40.700606 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 14 22:37:40.700611 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 14 22:37:40.700617 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 14 22:37:40.700622 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 14 22:37:40.700627 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 14 22:37:40.700632 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 14 22:37:40.700638 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 14 22:37:40.700645 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 14 22:37:40.700650 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 14 22:37:40.700655 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 14 22:37:40.700660 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 14 22:37:40.700666 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 14 22:37:40.700671 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 14 22:37:40.700676 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 14 22:37:40.700682 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 14 22:37:40.700687 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 14 22:37:40.700693 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 14 22:37:40.700698 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 14 22:37:40.700704 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 14 22:37:40.700709 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 14 22:37:40.700714 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 14 22:37:40.700720 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 14 22:37:40.700725 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 14 22:37:40.700730 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 14 22:37:40.700736 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 14 22:37:40.700742 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 14 22:37:40.700747 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 14 22:37:40.700753 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 14 22:37:40.700758 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 14 22:37:40.700763 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 14 22:37:40.700768 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 14 22:37:40.700774 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 14 22:37:40.700779 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 14 22:37:40.700784 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 14 22:37:40.700789 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 14 22:37:40.700796 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 14 22:37:40.700801 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 14 22:37:40.700806 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 14 22:37:40.700812 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 14 22:37:40.700817 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 14 22:37:40.700822 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 14 22:37:40.700828 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 14 22:37:40.700833 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 14 22:37:40.700838 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 14 22:37:40.700845 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 14 22:37:40.700850 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 14 22:37:40.700855 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 14 22:37:40.700861 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 14 22:37:40.700866 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 14 22:37:40.700871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 14 22:37:40.700877 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 14 22:37:40.700882 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 14 22:37:40.700887 kernel: TSC deadline timer available Jul 14 22:37:40.700894 kernel: CPU topo: Max. logical packages: 128 Jul 14 22:37:40.700899 kernel: CPU topo: Max. logical dies: 128 Jul 14 22:37:40.700904 kernel: CPU topo: Max. 
dies per package: 1 Jul 14 22:37:40.700910 kernel: CPU topo: Max. threads per core: 1 Jul 14 22:37:40.700915 kernel: CPU topo: Num. cores per package: 1 Jul 14 22:37:40.700920 kernel: CPU topo: Num. threads per package: 1 Jul 14 22:37:40.700926 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Jul 14 22:37:40.700931 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 14 22:37:40.700936 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 14 22:37:40.700942 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 14 22:37:40.700948 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 14 22:37:40.700954 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 14 22:37:40.700959 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 14 22:37:40.700965 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 14 22:37:40.700970 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 14 22:37:40.700975 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 14 22:37:40.700981 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 14 22:37:40.700986 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 14 22:37:40.700991 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 14 22:37:40.700997 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 14 22:37:40.701002 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 14 22:37:40.701008 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 14 22:37:40.701013 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 14 22:37:40.701018 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 14 22:37:40.701024 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 14 22:37:40.701029 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 14 22:37:40.701034 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 14 22:37:40.701040 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 14 22:37:40.701046 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 14 22:37:40.701052 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f410b284aa01ceabdc6514a023dd712766dc6ad9b98f3a3d0190d1808e2e1320 Jul 14 22:37:40.701058 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 22:37:40.701063 kernel: random: crng init done Jul 14 22:37:40.701068 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 14 22:37:40.701073 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 14 22:37:40.701079 kernel: printk: log_buf_len min size: 262144 bytes Jul 14 22:37:40.701084 kernel: printk: log_buf_len: 1048576 bytes Jul 14 22:37:40.701090 kernel: printk: early log buf free: 245592(93%) Jul 14 22:37:40.701096 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 22:37:40.701101 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 14 22:37:40.701107 kernel: Fallback order for Node 0: 0 Jul 14 22:37:40.701112 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Jul 14 22:37:40.701117 kernel: Policy zone: DMA32 Jul 14 22:37:40.701122 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 22:37:40.701128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 14 22:37:40.701133 kernel: ftrace: allocating 40101 entries in 157 pages Jul 14 22:37:40.701140 kernel: ftrace: allocated 157 pages with 5 groups Jul 14 22:37:40.701148 kernel: Dynamic Preempt: voluntary Jul 14 22:37:40.701154 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 22:37:40.701159 kernel: rcu: RCU event tracing is enabled. Jul 14 22:37:40.701165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 14 22:37:40.701170 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 22:37:40.701176 kernel: Rude variant of Tasks RCU enabled. Jul 14 22:37:40.701181 kernel: Tracing variant of Tasks RCU enabled. Jul 14 22:37:40.701187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 14 22:37:40.701193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 14 22:37:40.701199 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 14 22:37:40.701204 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 14 22:37:40.701210 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 14 22:37:40.701215 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 14 22:37:40.701220 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jul 14 22:37:40.701226 kernel: Console: colour VGA+ 80x25 Jul 14 22:37:40.701231 kernel: printk: legacy console [tty0] enabled Jul 14 22:37:40.701236 kernel: printk: legacy console [ttyS0] enabled Jul 14 22:37:40.701243 kernel: ACPI: Core revision 20240827 Jul 14 22:37:40.701249 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 14 22:37:40.701254 kernel: APIC: Switch to symmetric I/O mode setup Jul 14 22:37:40.701260 kernel: x2apic enabled Jul 14 22:37:40.701265 kernel: APIC: Switched APIC routing to: physical x2apic Jul 14 22:37:40.701270 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 14 22:37:40.701276 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 22:37:40.701281 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jul 14 22:37:40.701287 kernel: Disabled fast string operations Jul 14 22:37:40.701293 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 14 22:37:40.701298 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 14 22:37:40.701304 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 14 22:37:40.701309 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 14 22:37:40.701315 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 14 22:37:40.701320 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 14 22:37:40.701325 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 14 22:37:40.701331 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 14 22:37:40.701336 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 14 22:37:40.701343 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 14 22:37:40.701348 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 14 22:37:40.701354 kernel: GDS: Unknown: Dependent on hypervisor status Jul 14 22:37:40.701359 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 14 22:37:40.701364 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 14 22:37:40.701370 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 14 22:37:40.701375 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 14 22:37:40.701380 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 14 22:37:40.701386 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 14 22:37:40.701392 kernel: Freeing SMP alternatives memory: 32K Jul 14 22:37:40.701398 kernel: pid_max: default: 131072 minimum: 1024 Jul 14 22:37:40.701403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 14 22:37:40.701409 kernel: landlock: Up and running. Jul 14 22:37:40.701414 kernel: SELinux: Initializing. Jul 14 22:37:40.701419 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 14 22:37:40.701425 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 14 22:37:40.701430 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 14 22:37:40.701436 kernel: Performance Events: Skylake events, core PMU driver. Jul 14 22:37:40.701442 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 14 22:37:40.701448 kernel: core: CPUID marked event: 'instructions' unavailable Jul 14 22:37:40.701453 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 14 22:37:40.701458 kernel: core: CPUID marked event: 'cache references' unavailable Jul 14 22:37:40.701464 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 14 22:37:40.701469 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 14 22:37:40.701474 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 14 22:37:40.701480 kernel: ... version: 1 Jul 14 22:37:40.701494 kernel: ... bit width: 48 Jul 14 22:37:40.701500 kernel: ... generic registers: 4 Jul 14 22:37:40.701506 kernel: ... value mask: 0000ffffffffffff Jul 14 22:37:40.701511 kernel: ... max period: 000000007fffffff Jul 14 22:37:40.701517 kernel: ... fixed-purpose events: 0 Jul 14 22:37:40.701530 kernel: ... 
event mask: 000000000000000f Jul 14 22:37:40.701541 kernel: signal: max sigframe size: 1776 Jul 14 22:37:40.701546 kernel: rcu: Hierarchical SRCU implementation. Jul 14 22:37:40.701552 kernel: rcu: Max phase no-delay instances is 400. Jul 14 22:37:40.701558 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jul 14 22:37:40.701567 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 14 22:37:40.701573 kernel: smp: Bringing up secondary CPUs ... Jul 14 22:37:40.701578 kernel: smpboot: x86: Booting SMP configuration: Jul 14 22:37:40.701583 kernel: .... node #0, CPUs: #1 Jul 14 22:37:40.701589 kernel: Disabled fast string operations Jul 14 22:37:40.701594 kernel: smp: Brought up 1 node, 2 CPUs Jul 14 22:37:40.701599 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 14 22:37:40.701605 kernel: Memory: 1924264K/2096628K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54600K init, 2368K bss, 160980K reserved, 0K cma-reserved) Jul 14 22:37:40.701610 kernel: devtmpfs: initialized Jul 14 22:37:40.701617 kernel: x86/mm: Memory block size: 128MB Jul 14 22:37:40.701623 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 14 22:37:40.701628 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 22:37:40.701634 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 14 22:37:40.701639 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 22:37:40.701644 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 22:37:40.701650 kernel: audit: initializing netlink subsys (disabled) Jul 14 22:37:40.701655 kernel: audit: type=2000 audit(1752532657.279:1): state=initialized audit_enabled=0 res=1 Jul 14 22:37:40.701661 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 22:37:40.701667 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 22:37:40.701672 kernel: cpuidle: using governor menu Jul 14 22:37:40.701678 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 14 22:37:40.701683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 22:37:40.701689 kernel: dca service started, version 1.12.1 Jul 14 22:37:40.701694 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Jul 14 22:37:40.701708 kernel: PCI: Using configuration type 1 for base access Jul 14 22:37:40.701714 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 14 22:37:40.701720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 22:37:40.701727 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 22:37:40.701733 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 22:37:40.701739 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 22:37:40.701744 kernel: ACPI: Added _OSI(Module Device) Jul 14 22:37:40.701750 kernel: ACPI: Added _OSI(Processor Device) Jul 14 22:37:40.701756 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 22:37:40.701761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 22:37:40.701767 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 14 22:37:40.701773 kernel: ACPI: Interpreter enabled Jul 14 22:37:40.701779 kernel: ACPI: PM: (supports S0 S1 S5) Jul 14 22:37:40.701785 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 22:37:40.701791 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 22:37:40.701797 kernel: PCI: Using E820 reservations for host bridge windows Jul 14 22:37:40.701802 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 14 22:37:40.701808 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 14 22:37:40.701889 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 22:37:40.701943 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 14 22:37:40.701991 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 14 22:37:40.702000 kernel: PCI host bridge to bus 0000:00 Jul 14 22:37:40.702049 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 22:37:40.702094 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 14 22:37:40.702137 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 22:37:40.702179 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 22:37:40.702224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 14 22:37:40.702266 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 14 22:37:40.702324 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Jul 14 22:37:40.702383 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Jul 14 22:37:40.702435 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:37:40.702524 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 14 22:37:40.702586 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Jul 14 22:37:40.702639 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Jul 14 22:37:40.702690 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jul 14 22:37:40.702739 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jul 14 22:37:40.702790 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jul 14 22:37:40.702839 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Jul 14 22:37:40.702891 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 14 22:37:40.702941 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 14 22:37:40.702989 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 14 22:37:40.703044 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Jul 14 22:37:40.703094 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Jul 14 22:37:40.703146 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Jul 14 22:37:40.703203 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Jul 14 22:37:40.703253 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Jul 14 22:37:40.703302 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Jul 14 22:37:40.703350 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Jul 14 22:37:40.703399 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Jul 14 22:37:40.703447 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 22:37:40.703818 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Jul 14 22:37:40.703874 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 14 22:37:40.703924 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 22:37:40.703973 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 22:37:40.704341 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:37:40.704402 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.704457 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:37:40.704517 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 22:37:40.704568 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 22:37:40.704618 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.704674 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.704726 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:37:40.704778 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 22:37:40.704827 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 22:37:40.704880 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 22:37:40.704931 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.704985 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.705036 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:37:40.705087 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 22:37:40.705137 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 22:37:40.705189 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:37:40.705239 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.705294 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.705345 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:37:40.705395 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 22:37:40.705445 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:37:40.706549 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.706618 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.706673 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:37:40.706725 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 22:37:40.706775 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:37:40.706825 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.706880 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.706931 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:37:40.706983 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 22:37:40.707032 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:37:40.707081 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.707138 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.707195 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:37:40.707244 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 22:37:40.707293 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:37:40.707344 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.707396 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.707446 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:37:40.708385 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 22:37:40.708445 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:37:40.708529 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.708587 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.708642 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:37:40.708693 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 22:37:40.708744 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 22:37:40.708794 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.708848 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.708899 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:37:40.708949 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 22:37:40.709012 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 22:37:40.709073 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:37:40.709147 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.709225 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.709288 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:37:40.709338 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 22:37:40.709387 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 22:37:40.709439 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:37:40.711568 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.711639 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.711694 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:37:40.711745 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 22:37:40.711795 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:37:40.711845 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.711898 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.711952 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:37:40.712001 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 22:37:40.712050 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:37:40.712099 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.712152 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.712207 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:37:40.712256 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 22:37:40.712308 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:37:40.712356 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.712411 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.712462 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:37:40.712535 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 22:37:40.712928 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:37:40.712997 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.713063 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.713122 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:37:40.713173 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 22:37:40.713224 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:37:40.713291 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.713346 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.713397 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:37:40.713449 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 22:37:40.713897 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 22:37:40.713970 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:37:40.714029 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.714086 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.714150 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:37:40.714217 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 22:37:40.714267 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 22:37:40.714317 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:37:40.714366 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.714422 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.714474 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:37:40.714540 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 22:37:40.714590 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 22:37:40.715002 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:37:40.715060 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.715117 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.715170 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:37:40.715223 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 22:37:40.715273 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 
22:37:40.715322 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.715376 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.715426 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:37:40.715477 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 22:37:40.715541 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:37:40.715597 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.715652 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.715703 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:37:40.715754 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 22:37:40.715803 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:37:40.715852 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.715905 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.715957 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:37:40.716006 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 22:37:40.716054 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:37:40.716104 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.716159 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.716209 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:37:40.716257 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 22:37:40.716308 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:37:40.716357 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.716411 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.716462 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:37:40.716528 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 22:37:40.716590 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 22:37:40.716640 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:37:40.716690 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.716747 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.716797 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:37:40.716846 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 22:37:40.716895 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 22:37:40.716944 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:37:40.716993 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.717049 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.717100 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:37:40.717159 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 22:37:40.717227 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:37:40.717275 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.717329 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.717379 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Jul 14 22:37:40.717429 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 22:37:40.717477 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:37:40.717541 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.717594 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.717643 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 22:37:40.717691 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 22:37:40.717740 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:37:40.717788 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.717843 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.717892 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:37:40.717940 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 22:37:40.717988 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:37:40.718036 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.718090 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.718139 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:37:40.718190 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 22:37:40.718238 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:37:40.718287 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.718340 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 14 22:37:40.718390 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:37:40.718438 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 22:37:40.718676 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:37:40.718735 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.718790 kernel: pci_bus 0000:01: extended config space not accessible Jul 14 22:37:40.718843 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:37:40.718894 kernel: pci_bus 0000:02: extended config space not accessible Jul 14 22:37:40.718903 kernel: acpiphp: Slot [32] registered Jul 14 22:37:40.718910 kernel: acpiphp: Slot [33] registered Jul 14 22:37:40.718915 kernel: acpiphp: Slot [34] registered Jul 14 22:37:40.718921 kernel: acpiphp: Slot [35] registered Jul 14 22:37:40.718928 kernel: acpiphp: Slot [36] registered Jul 14 22:37:40.718934 kernel: acpiphp: Slot [37] registered Jul 14 22:37:40.718940 kernel: acpiphp: Slot [38] registered Jul 14 22:37:40.718946 kernel: acpiphp: Slot [39] registered Jul 14 22:37:40.718951 kernel: acpiphp: Slot [40] registered Jul 14 22:37:40.718957 kernel: acpiphp: Slot [41] registered Jul 14 22:37:40.718963 kernel: acpiphp: Slot [42] registered Jul 14 22:37:40.718968 kernel: acpiphp: Slot [43] registered Jul 14 22:37:40.718974 kernel: acpiphp: Slot [44] registered Jul 14 22:37:40.718981 kernel: acpiphp: Slot [45] registered Jul 14 22:37:40.718987 kernel: acpiphp: Slot [46] registered Jul 14 22:37:40.718993 kernel: acpiphp: Slot [47] registered Jul 14 22:37:40.718998 kernel: acpiphp: Slot [48] registered Jul 14 22:37:40.719004 kernel: acpiphp: Slot [49] registered Jul 14 22:37:40.719009 kernel: acpiphp: Slot [50] registered Jul 14 22:37:40.719015 kernel: acpiphp: Slot [51] registered Jul 14 
22:37:40.719020 kernel: acpiphp: Slot [52] registered Jul 14 22:37:40.719026 kernel: acpiphp: Slot [53] registered Jul 14 22:37:40.719032 kernel: acpiphp: Slot [54] registered Jul 14 22:37:40.719038 kernel: acpiphp: Slot [55] registered Jul 14 22:37:40.719044 kernel: acpiphp: Slot [56] registered Jul 14 22:37:40.719050 kernel: acpiphp: Slot [57] registered Jul 14 22:37:40.719055 kernel: acpiphp: Slot [58] registered Jul 14 22:37:40.719061 kernel: acpiphp: Slot [59] registered Jul 14 22:37:40.719066 kernel: acpiphp: Slot [60] registered Jul 14 22:37:40.719072 kernel: acpiphp: Slot [61] registered Jul 14 22:37:40.719078 kernel: acpiphp: Slot [62] registered Jul 14 22:37:40.719083 kernel: acpiphp: Slot [63] registered Jul 14 22:37:40.719133 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 14 22:37:40.719183 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 14 22:37:40.719231 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 14 22:37:40.719279 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 14 22:37:40.719326 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 14 22:37:40.719375 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 14 22:37:40.719430 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jul 14 22:37:40.719481 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jul 14 22:37:40.721581 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 14 22:37:40.721639 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 14 22:37:40.721692 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 14 22:37:40.721744 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jul 14 22:37:40.721797 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:37:40.721850 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:37:40.721901 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:37:40.721956 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:37:40.722006 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:37:40.722056 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:37:40.722107 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:37:40.722163 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:37:40.722220 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jul 14 22:37:40.722272 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jul 14 22:37:40.722325 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jul 14 22:37:40.722375 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jul 14 22:37:40.722426 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jul 14 22:37:40.722476 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 14 22:37:40.723382 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 14 22:37:40.723438 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 14 22:37:40.723503 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 22:37:40.723562 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:37:40.723614 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:37:40.723665 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:37:40.723716 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:37:40.723765 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:37:40.723815 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:37:40.723864 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:37:40.723913 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:37:40.723965 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:37:40.724014 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:37:40.724062 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:37:40.724111 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:37:40.724160 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:37:40.724208 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:37:40.724257 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:37:40.724308 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:37:40.724356 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:37:40.724405 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:37:40.724453 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:37:40.729531 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 22:37:40.729594 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 22:37:40.729648 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:37:40.729701 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:37:40.729755 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:37:40.729764 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 14 22:37:40.729770 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 14 22:37:40.729776 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 14 22:37:40.729782 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 22:37:40.729788 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 14 22:37:40.729794 kernel: iommu: Default domain type: Translated Jul 14 22:37:40.729799 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 22:37:40.729807 kernel: PCI: Using ACPI for IRQ routing Jul 14 22:37:40.729813 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 22:37:40.729820 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 14 22:37:40.729825 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 14 22:37:40.729876 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 14 22:37:40.729925 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 14 22:37:40.729974 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 22:37:40.729982 kernel: vgaarb: loaded Jul 14 22:37:40.729988 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 14 22:37:40.729996 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 14 22:37:40.730002 kernel: clocksource: Switched to clocksource tsc-early Jul 14 22:37:40.730008 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:37:40.730014 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:37:40.730022 kernel: pnp: PnP ACPI init Jul 14 22:37:40.730077 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 14 22:37:40.730129 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Jul 14 22:37:40.730178 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 14 22:37:40.730230 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 14 22:37:40.730288 kernel: pnp 00:06: [dma 2] Jul 14 22:37:40.730337 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 14 22:37:40.730382 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 14 22:37:40.730425 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 14 22:37:40.730434 kernel: pnp: PnP ACPI: found 8 devices Jul 14 22:37:40.730442 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 22:37:40.730448 kernel: NET: Registered PF_INET protocol family Jul 14 22:37:40.730454 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:37:40.730460 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 14 22:37:40.730466 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:37:40.730471 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 14 22:37:40.730477 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 14 22:37:40.730488 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 14 22:37:40.730495 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 22:37:40.730502 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 22:37:40.730508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:37:40.730513 kernel: NET: Registered PF_XDP protocol family Jul 14 22:37:40.730564 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 14 22:37:40.730614 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 14 22:37:40.730664 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 14 22:37:40.730713 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 14 22:37:40.730763 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 14 22:37:40.730815 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 14 22:37:40.730865 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 14 22:37:40.730915 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 14 22:37:40.730964 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 14 22:37:40.731013 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 14 22:37:40.731062 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 14 22:37:40.731111 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 14 22:37:40.731161 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 14 22:37:40.731213 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 14 22:37:40.731261 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 14 22:37:40.731310 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 14 
22:37:40.731358 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 14 22:37:40.731407 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 14 22:37:40.731456 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 14 22:37:40.732712 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 14 22:37:40.732770 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 14 22:37:40.732862 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 14 22:37:40.732920 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 14 22:37:40.732976 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jul 14 22:37:40.733026 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Jul 14 22:37:40.733076 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.733125 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.733178 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.733229 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.733279 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.733329 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.733378 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.733427 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.733476 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.734552 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.734610 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.734664 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.734712 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.734760 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.734809 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.734858 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.734907 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.734955 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735004 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735056 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735104 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735153 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735201 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735249 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735298 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735346 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735395 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735445 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735596 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735650 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735700 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735749 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735798 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735846 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735895 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.735947 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.735995 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736044 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736092 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736141 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736193 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736240 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736288 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736338 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736424 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736476 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736538 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736587 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736636 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736683 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736732 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736780 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736831 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736879 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.736926 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.736974 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737022 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737071 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737119 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737207 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737270 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Jul 14 22:37:40.737318 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737368 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737417 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737465 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737520 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737569 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737618 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737667 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737715 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737763 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737813 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737861 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.737910 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.737961 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.738009 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.738057 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.738107 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.738155 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.738204 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.738252 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.738300 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.738349 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 14 22:37:40.738397 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 14 22:37:40.738446 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:37:40.738506 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 14 22:37:40.738560 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 22:37:40.738608 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 22:37:40.738657 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:37:40.738709 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jul 14 22:37:40.738758 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:37:40.738807 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 22:37:40.738855 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 22:37:40.738905 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:37:40.738954 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:37:40.739006 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 22:37:40.739055 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 22:37:40.739104 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Jul 14 22:37:40.739153 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:37:40.739202 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 22:37:40.739250 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 22:37:40.739299 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:37:40.739347 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:37:40.739396 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 22:37:40.739444 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:37:40.739513 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:37:40.739565 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 22:37:40.739613 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:37:40.739683 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:37:40.739736 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 22:37:40.739784 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:37:40.739835 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:37:40.739884 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 22:37:40.739931 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:37:40.739980 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:37:40.740028 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 22:37:40.740075 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:37:40.740126 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jul 14 22:37:40.740189 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:37:40.740249 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 22:37:40.740298 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 22:37:40.740347 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:37:40.740396 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:37:40.740444 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 22:37:40.740509 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 22:37:40.741674 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:37:40.741739 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:37:40.741791 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 22:37:40.741846 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 22:37:40.741896 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:37:40.741945 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:37:40.741994 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 22:37:40.742042 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:37:40.742091 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:37:40.742157 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 22:37:40.742222 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:37:40.742274 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:37:40.742322 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 22:37:40.742371 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:37:40.742419 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:37:40.742467 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 22:37:40.742714 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:37:40.742767 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:37:40.742820 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 22:37:40.742869 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:37:40.742920 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:37:40.742968 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 22:37:40.743018 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 22:37:40.743067 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:37:40.743117 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:37:40.743165 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 22:37:40.743214 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 22:37:40.743266 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:37:40.743315 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:37:40.743364 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 22:37:40.743412 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 22:37:40.743460 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:37:40.743526 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:37:40.743576 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 22:37:40.743624 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:37:40.743674 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:37:40.743726 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 22:37:40.743774 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:37:40.743823 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:37:40.743872 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 22:37:40.743920 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:37:40.743969 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:37:40.744017 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 22:37:40.744068 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:37:40.744117 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:37:40.744170 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 22:37:40.744218 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:37:40.744267 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:37:40.744315 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 22:37:40.744363 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 22:37:40.744411 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:37:40.744463 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:37:40.744537 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 22:37:40.744587 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Jul 14 22:37:40.744636 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:37:40.744685 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:37:40.744733 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 22:37:40.744781 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:37:40.744831 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 22:37:40.744880 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 22:37:40.744932 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:37:40.744982 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 22:37:40.745030 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 22:37:40.745078 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:37:40.745127 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:37:40.745176 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 22:37:40.745225 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:37:40.745276 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:37:40.745325 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 22:37:40.745373 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:37:40.745422 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:37:40.745470 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 22:37:40.745541 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:37:40.746638 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 22:37:40.746687 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 22:37:40.746751 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 22:37:40.746798 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 14 22:37:40.746841 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 14 22:37:40.746889 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 14 22:37:40.746934 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 14 22:37:40.746979 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:37:40.747026 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 22:37:40.747069 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 22:37:40.747113 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 22:37:40.747189 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 14 22:37:40.747235 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 14 22:37:40.747285 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 14 22:37:40.747330 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 14 22:37:40.747379 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:37:40.747428 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 14 22:37:40.747474 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 14 22:37:40.748166 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 22:37:40.748224 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 14 22:37:40.748271 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 
14 22:37:40.748317 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:37:40.748370 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 14 22:37:40.748415 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:37:40.748465 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 14 22:37:40.748532 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:37:40.748583 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 14 22:37:40.748629 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:37:40.748680 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 14 22:37:40.748729 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:37:40.748778 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 14 22:37:40.748823 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:37:40.748873 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 14 22:37:40.748919 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 14 22:37:40.748967 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:37:40.749017 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 14 22:37:40.749063 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 14 22:37:40.749109 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:37:40.749160 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 14 22:37:40.749206 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 14 22:37:40.749250 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:37:40.749302 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 14 22:37:40.749348 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:37:40.749397 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 14 22:37:40.749442 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:37:40.750796 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 14 22:37:40.750850 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:37:40.750904 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 14 22:37:40.750951 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:37:40.751000 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 14 22:37:40.751046 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:37:40.751096 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 14 22:37:40.751142 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 14 22:37:40.751189 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:37:40.751240 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 14 22:37:40.751285 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 14 22:37:40.751330 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:37:40.751379 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 14 22:37:40.751425 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 14 22:37:40.751470 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:37:40.751777 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 14 22:37:40.751826 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:37:40.751875 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 14 22:37:40.751921 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:37:40.751972 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 14 22:37:40.752018 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:37:40.752071 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 14 22:37:40.752117 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:37:40.752166 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 14 22:37:40.752211 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:37:40.752259 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 14 22:37:40.752305 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 14 22:37:40.752350 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:37:40.752404 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 14 22:37:40.752450 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 14 22:37:40.752512 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:37:40.752561 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 14 22:37:40.752607 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:37:40.752655 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 14 22:37:40.752703 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:37:40.752754 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 14 22:37:40.752800 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:37:40.752849 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 14 22:37:40.752894 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:37:40.752946 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 14 22:37:40.752994 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:37:40.753043 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 14 22:37:40.753089 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:37:40.753148 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 14 22:37:40.753158 kernel: PCI: CLS 32 bytes, default 64 Jul 14 22:37:40.753165 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 14 22:37:40.753171 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 22:37:40.753177 kernel: clocksource: Switched to clocksource tsc Jul 14 22:37:40.753188 kernel: Initialise system trusted keyrings Jul 14 22:37:40.753194 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 14 22:37:40.753200 kernel: Key type asymmetric registered Jul 14 22:37:40.753206 kernel: Asymmetric key parser 'x509' registered Jul 14 22:37:40.753212 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 14 22:37:40.753218 kernel: io scheduler mq-deadline registered Jul 14 22:37:40.753224 kernel: io scheduler kyber registered Jul 14 22:37:40.753230 kernel: io scheduler bfq 
registered Jul 14 22:37:40.753285 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 14 22:37:40.753340 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.753394 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 14 22:37:40.753444 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.753506 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 14 22:37:40.753558 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.753610 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 14 22:37:40.753663 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.753715 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 14 22:37:40.753765 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.753817 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 14 22:37:40.753868 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.753920 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 14 22:37:40.753970 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754021 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 14 22:37:40.754074 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754124 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 14 22:37:40.754174 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754226 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 14 22:37:40.754276 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754327 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 14 22:37:40.754376 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754430 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 14 22:37:40.754481 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754563 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 14 22:37:40.754617 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754669 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 14 22:37:40.754719 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Jul 14 22:37:40.754770 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 14 22:37:40.754823 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754875 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 14 22:37:40.754926 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.754976 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 14 22:37:40.755028 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755078 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 14 22:37:40.755129 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755180 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 14 22:37:40.755233 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755284 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 14 22:37:40.755336 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755387 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 14 22:37:40.755437 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755500 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 14 22:37:40.755554 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755609 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 14 22:37:40.755661 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755713 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 14 22:37:40.755764 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755814 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 14 22:37:40.755864 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.755916 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 14 22:37:40.755966 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.756019 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 14 22:37:40.756070 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.756122 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 14 22:37:40.756176 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 
22:37:40.756231 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 14 22:37:40.756282 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.756334 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 14 22:37:40.756387 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.756437 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 14 22:37:40.756741 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.757976 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 14 22:37:40.758035 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:37:40.758045 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 14 22:37:40.758055 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:37:40.758063 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 22:37:40.758069 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 14 22:37:40.758076 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 22:37:40.758082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 22:37:40.758136 kernel: rtc_cmos 00:01: registered as rtc0 Jul 14 22:37:40.758186 kernel: rtc_cmos 00:01: setting system clock to 2025-07-14T22:37:40 UTC (1752532660) Jul 14 22:37:40.758231 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 14 22:37:40.758241 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 22:37:40.758251 kernel: intel_pstate: CPU model not supported Jul 14 22:37:40.758257 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:37:40.758263 kernel: Segment Routing with IPv6 Jul 14 22:37:40.758269 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:37:40.758275 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:37:40.758282 kernel: Key type dns_resolver registered Jul 14 22:37:40.758290 kernel: IPI shorthand broadcast: enabled Jul 14 22:37:40.758296 kernel: sched_clock: Marking stable (2696003579, 165425804)->(2875470940, -14041557) Jul 14 22:37:40.758302 kernel: registered taskstats version 1 Jul 14 22:37:40.758309 kernel: Loading compiled-in X.509 certificates Jul 14 22:37:40.758316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.37-flatcar: 9c7654c28ef536b045f7837c9e022792dfa016b1' Jul 14 22:37:40.758322 kernel: Demotion targets for Node 0: null Jul 14 22:37:40.758328 kernel: Key type .fscrypt registered Jul 14 22:37:40.758335 kernel: Key type fscrypt-provisioning registered Jul 14 22:37:40.758341 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 14 22:37:40.758347 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:37:40.758354 kernel: ima: No architecture policies found Jul 14 22:37:40.758360 kernel: clk: Disabling unused clocks Jul 14 22:37:40.758367 kernel: Warning: unable to open an initial console. 
Jul 14 22:37:40.758374 kernel: Freeing unused kernel image (initmem) memory: 54600K Jul 14 22:37:40.758380 kernel: Write protecting the kernel read-only data: 24576k Jul 14 22:37:40.758386 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Jul 14 22:37:40.758392 kernel: Run /init as init process Jul 14 22:37:40.758398 kernel: with arguments: Jul 14 22:37:40.758405 kernel: /init Jul 14 22:37:40.758411 kernel: with environment: Jul 14 22:37:40.758417 kernel: HOME=/ Jul 14 22:37:40.758424 kernel: TERM=linux Jul 14 22:37:40.758430 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:37:40.758437 systemd[1]: Successfully made /usr/ read-only. Jul 14 22:37:40.758446 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 14 22:37:40.758453 systemd[1]: Detected virtualization vmware. Jul 14 22:37:40.758459 systemd[1]: Detected architecture x86-64. Jul 14 22:37:40.758466 systemd[1]: Running in initrd. Jul 14 22:37:40.758472 systemd[1]: No hostname configured, using default hostname. Jul 14 22:37:40.758480 systemd[1]: Hostname set to . Jul 14 22:37:40.758505 systemd[1]: Initializing machine ID from random generator. Jul 14 22:37:40.758512 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:37:40.758519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:37:40.758526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:37:40.758533 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 22:37:40.758540 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:37:40.758548 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 22:37:40.758556 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 22:37:40.758564 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 22:37:40.758570 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 22:37:40.758577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:37:40.758583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:37:40.758590 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:37:40.758598 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:37:40.758605 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:37:40.758611 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:37:40.758618 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:37:40.758624 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:37:40.758631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 22:37:40.758637 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 14 22:37:40.758644 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 14 22:37:40.758651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:37:40.758658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:37:40.758665 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:37:40.758672 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 22:37:40.758678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:37:40.758685 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 22:37:40.758692 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 14 22:37:40.758698 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:37:40.758705 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:37:40.758713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:37:40.758719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:37:40.758726 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 22:37:40.758749 systemd-journald[244]: Collecting audit messages is disabled. Jul 14 22:37:40.758768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:37:40.758775 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:37:40.758782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:37:40.758788 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:37:40.758796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:37:40.758803 kernel: Bridge firewalling registered Jul 14 22:37:40.758810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:37:40.758817 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:37:40.758823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:37:40.758830 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:37:40.758837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:37:40.758844 systemd-journald[244]: Journal started Jul 14 22:37:40.758860 systemd-journald[244]: Runtime Journal (/run/log/journal/6c6f9c0780e24491a0a7ef5eca89b634) is 4.8M, max 38.8M, 34M free. Jul 14 22:37:40.713084 systemd-modules-load[245]: Inserted module 'overlay' Jul 14 22:37:40.734450 systemd-modules-load[245]: Inserted module 'br_netfilter' Jul 14 22:37:40.761769 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:37:40.761791 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:37:40.764766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:37:40.765467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:37:40.770797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:37:40.772432 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jul 14 22:37:40.772736 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 22:37:40.774008 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:37:40.775571 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:37:40.783530 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f410b284aa01ceabdc6514a023dd712766dc6ad9b98f3a3d0190d1808e2e1320 Jul 14 22:37:40.800631 systemd-resolved[284]: Positive Trust Anchors: Jul 14 22:37:40.800639 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:37:40.800662 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:37:40.803283 systemd-resolved[284]: Defaulting to hostname 'linux'. Jul 14 22:37:40.804170 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:37:40.804335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:37:40.836501 kernel: SCSI subsystem initialized Jul 14 22:37:40.852499 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:37:40.860504 kernel: iscsi: registered transport (tcp) Jul 14 22:37:40.882772 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:37:40.882819 kernel: QLogic iSCSI HBA Driver Jul 14 22:37:40.893063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:37:40.906559 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:37:40.907548 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:37:40.930214 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 22:37:40.931095 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 22:37:40.968500 kernel: raid6: avx2x4 gen() 47518 MB/s Jul 14 22:37:40.985496 kernel: raid6: avx2x2 gen() 53491 MB/s Jul 14 22:37:41.002641 kernel: raid6: avx2x1 gen() 44553 MB/s Jul 14 22:37:41.002658 kernel: raid6: using algorithm avx2x2 gen() 53491 MB/s Jul 14 22:37:41.020656 kernel: raid6: .... xor() 32247 MB/s, rmw enabled Jul 14 22:37:41.020684 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:37:41.034498 kernel: xor: automatically using best checksumming function avx Jul 14 22:37:41.137503 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 22:37:41.141092 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:37:41.142187 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 14 22:37:41.163743 systemd-udevd[493]: Using default interface naming scheme 'v255'. Jul 14 22:37:41.167072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:37:41.168096 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 22:37:41.185874 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Jul 14 22:37:41.199007 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:37:41.199951 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:37:41.271970 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:37:41.273922 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 22:37:41.343497 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 14 22:37:41.347508 kernel: vmw_pvscsi: using 64bit dma Jul 14 22:37:41.347526 kernel: vmw_pvscsi: max_id: 16 Jul 14 22:37:41.348498 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 14 22:37:41.356099 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 14 22:37:41.356129 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 14 22:37:41.356139 kernel: vmw_pvscsi: using MSI-X Jul 14 22:37:41.361210 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 14 22:37:41.361242 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jul 14 22:37:41.363506 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 14 22:37:41.365507 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 14 22:37:41.369506 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 14 22:37:41.375515 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 14 22:37:41.380497 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:37:41.386499 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 14 22:37:41.387066 (udev-worker)[537]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 14 22:37:41.392537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:37:41.392614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:37:41.393393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:37:41.396626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:37:41.401372 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 14 22:37:41.401480 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 14 22:37:41.401578 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 14 22:37:41.401640 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 14 22:37:41.401699 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 14 22:37:41.407501 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jul 14 22:37:41.410873 kernel: libata version 3.00 loaded. Jul 14 22:37:41.410895 kernel: AES CTR mode by8 optimization enabled Jul 14 22:37:41.424491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 14 22:37:41.460050 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:37:41.460456 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 14 22:37:41.468996 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 14 22:37:41.469127 kernel: scsi host1: ata_piix Jul 14 22:37:41.473375 kernel: scsi host2: ata_piix Jul 14 22:37:41.473479 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jul 14 22:37:41.473498 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jul 14 22:37:41.640512 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 14 22:37:41.644504 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 14 22:37:41.672510 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 14 22:37:41.672637 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:37:41.685506 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:37:41.765933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 22:37:41.772880 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 14 22:37:41.779152 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 14 22:37:41.794046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 14 22:37:41.794324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 14 22:37:41.795058 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 14 22:37:41.831508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:37:41.842498 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:37:41.943422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 22:37:41.954095 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:37:41.954228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:37:41.954440 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:37:41.955081 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 22:37:41.971533 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:37:42.897503 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:37:42.897776 disk-uuid[644]: The operation has completed successfully. Jul 14 22:37:43.041846 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 22:37:43.041916 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 22:37:43.057418 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 22:37:43.064243 sh[673]: Success Jul 14 22:37:43.077858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 22:37:43.077884 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:37:43.079001 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 14 22:37:43.085516 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 14 22:37:43.127822 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 22:37:43.130521 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 14 22:37:43.138315 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 22:37:43.150816 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 14 22:37:43.150849 kernel: BTRFS: device fsid 37339954-65ea-413a-8ddc-1919cedf99fb devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (685) Jul 14 22:37:43.152354 kernel: BTRFS info (device dm-0): first mount of filesystem 37339954-65ea-413a-8ddc-1919cedf99fb Jul 14 22:37:43.152375 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:37:43.153940 kernel: BTRFS info (device dm-0): using free-space-tree Jul 14 22:37:43.160934 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 22:37:43.161286 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 14 22:37:43.162095 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 14 22:37:43.163543 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 22:37:43.188425 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (708) Jul 14 22:37:43.188457 kernel: BTRFS info (device sda6): first mount of filesystem c6464955-ee12-4a70-8721-d590c51e77c0 Jul 14 22:37:43.188466 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:37:43.190065 kernel: BTRFS info (device sda6): using free-space-tree Jul 14 22:37:43.200512 kernel: BTRFS info (device sda6): last unmount of filesystem c6464955-ee12-4a70-8721-d590c51e77c0 Jul 14 22:37:43.201209 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 22:37:43.202567 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 22:37:43.250683 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 22:37:43.253657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 14 22:37:43.313176 ignition[727]: Ignition 2.21.0 Jul 14 22:37:43.313188 ignition[727]: Stage: fetch-offline Jul 14 22:37:43.313207 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:37:43.313212 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:37:43.313259 ignition[727]: parsed url from cmdline: "" Jul 14 22:37:43.313261 ignition[727]: no config URL provided Jul 14 22:37:43.313264 ignition[727]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:37:43.313268 ignition[727]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:37:43.313731 ignition[727]: config successfully fetched Jul 14 22:37:43.313749 ignition[727]: parsing config with SHA512: 725b5fc9b7f0cda33c3b5b42246d568f95431068037e6dc4b73434f770837f9bb18e4e70738934c743c13783ddf32fdb65f3a9e2919b0c74c4f131bb1810b622 Jul 14 22:37:43.319289 unknown[727]: fetched base config from "system" Jul 14 22:37:43.319297 unknown[727]: fetched user config from "vmware" Jul 14 22:37:43.319609 ignition[727]: fetch-offline: fetch-offline passed Jul 14 22:37:43.319661 ignition[727]: Ignition finished successfully Jul 14 22:37:43.320564 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:37:43.331941 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:37:43.333025 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 14 22:37:43.356227 systemd-networkd[865]: lo: Link UP Jul 14 22:37:43.356233 systemd-networkd[865]: lo: Gained carrier Jul 14 22:37:43.357050 systemd-networkd[865]: Enumeration completed Jul 14 22:37:43.357321 systemd-networkd[865]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 14 22:37:43.357534 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:37:43.357678 systemd[1]: Reached target network.target - Network. Jul 14 22:37:43.357769 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:37:43.358783 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 22:37:43.360546 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 22:37:43.360659 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 22:37:43.361568 systemd-networkd[865]: ens192: Link UP Jul 14 22:37:43.361572 systemd-networkd[865]: ens192: Gained carrier Jul 14 22:37:43.379967 ignition[868]: Ignition 2.21.0 Jul 14 22:37:43.380267 ignition[868]: Stage: kargs Jul 14 22:37:43.380456 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:37:43.380595 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:37:43.381232 ignition[868]: kargs: kargs passed Jul 14 22:37:43.381346 ignition[868]: Ignition finished successfully Jul 14 22:37:43.382829 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 22:37:43.383680 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 22:37:43.397063 ignition[876]: Ignition 2.21.0 Jul 14 22:37:43.397073 ignition[876]: Stage: disks Jul 14 22:37:43.397161 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:37:43.397169 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:37:43.398881 ignition[876]: disks: disks passed Jul 14 22:37:43.398926 ignition[876]: Ignition finished successfully Jul 14 22:37:43.399818 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 22:37:43.400173 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 22:37:43.400583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:37:43.400816 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:37:43.401038 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:37:43.401265 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:37:43.402001 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 22:37:43.419979 systemd-fsck[884]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 14 22:37:43.421294 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 22:37:43.422123 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 22:37:43.499512 kernel: EXT4-fs (sda9): mounted filesystem 4b0aee01-21d6-4f7f-85ed-854e0d3e61ff r/w with ordered data mode. Quota mode: none. Jul 14 22:37:43.499997 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 22:37:43.500426 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 22:37:43.501423 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:37:43.503527 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 14 22:37:43.503900 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 22:37:43.504070 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:37:43.504085 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:37:43.508387 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 22:37:43.509400 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 22:37:43.517025 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (892) Jul 14 22:37:43.517051 kernel: BTRFS info (device sda6): first mount of filesystem c6464955-ee12-4a70-8721-d590c51e77c0 Jul 14 22:37:43.517060 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:37:43.517067 kernel: BTRFS info (device sda6): using free-space-tree Jul 14 22:37:43.520922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:37:43.540044 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:37:43.542917 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:37:43.545220 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:37:43.547351 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:37:43.600906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:37:43.601729 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:37:43.603558 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 22:37:43.611494 kernel: BTRFS info (device sda6): last unmount of filesystem c6464955-ee12-4a70-8721-d590c51e77c0 Jul 14 22:37:43.626877 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 22:37:43.633557 ignition[1005]: INFO : Ignition 2.21.0 Jul 14 22:37:43.633557 ignition[1005]: INFO : Stage: mount Jul 14 22:37:43.633879 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:37:43.633879 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:37:43.634155 ignition[1005]: INFO : mount: mount passed Jul 14 22:37:43.634667 ignition[1005]: INFO : Ignition finished successfully Jul 14 22:37:43.634913 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:37:43.635580 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:37:44.149092 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:37:44.150150 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:37:44.166345 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1016) Jul 14 22:37:44.166375 kernel: BTRFS info (device sda6): first mount of filesystem c6464955-ee12-4a70-8721-d590c51e77c0 Jul 14 22:37:44.166383 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:37:44.167947 kernel: BTRFS info (device sda6): using free-space-tree Jul 14 22:37:44.171026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 22:37:44.187451 ignition[1033]: INFO : Ignition 2.21.0 Jul 14 22:37:44.187451 ignition[1033]: INFO : Stage: files Jul 14 22:37:44.187855 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:37:44.187855 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:37:44.188212 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:37:44.188969 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:37:44.188969 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:37:44.190762 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:37:44.190970 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:37:44.191145 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:37:44.191139 unknown[1033]: wrote ssh authorized keys file for user: core Jul 14 22:37:44.192725 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 22:37:44.192954 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 22:37:44.233368 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 22:37:44.421344 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 22:37:44.421344 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 22:37:44.421815 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 14 22:37:44.529627 systemd-networkd[865]: ens192: Gained IPv6LL Jul 14 22:37:44.922720 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 22:37:44.984504 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 22:37:44.984504 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:37:44.984504 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:37:44.984504 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:37:44.984504 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:37:44.984504 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:37:44.985686 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:37:44.985686 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:37:44.985686 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:37:45.002062 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:37:45.002322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:37:45.002322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:37:45.011514 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:37:45.011514 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:37:45.011969 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 22:37:50.373888 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 14 22:37:50.648064 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:37:50.648064 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 22:37:50.657167 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 22:37:50.657167 ignition[1033]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jul 14 22:37:50.672607 ignition[1033]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:37:50.680596 ignition[1033]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:37:50.680596 ignition[1033]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Jul 14 22:37:50.680596 ignition[1033]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Jul 14 22:37:50.681221 ignition[1033]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:37:50.681221 ignition[1033]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:37:50.681221 ignition[1033]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Jul 14 22:37:50.681221 ignition[1033]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:37:50.748342 ignition[1033]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: 
op(13): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:37:50.750944 ignition[1033]: INFO : files: files passed Jul 14 22:37:50.750944 ignition[1033]: INFO : Ignition finished successfully Jul 14 22:37:50.752268 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 22:37:50.753213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 22:37:50.754604 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 22:37:50.767972 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:37:50.768345 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 22:37:50.771371 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:37:50.771371 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:37:50.772247 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:37:50.773132 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:37:50.773537 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 22:37:50.774089 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:37:50.813811 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:37:50.813886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:37:50.814152 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:37:50.814276 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:37:50.814474 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:37:50.814960 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:37:50.830932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:37:50.831983 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:37:50.852084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:37:50.852481 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:37:50.852923 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 22:37:50.853322 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:37:50.853573 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:37:50.854110 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:37:50.854504 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:37:50.854831 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:37:50.855235 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jul 14 22:37:50.855641 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:37:50.856038 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 14 22:37:50.856425 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:37:50.856771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:37:50.857236 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:37:50.857634 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:37:50.858010 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:37:50.858325 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:37:50.858576 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:37:50.859074 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:37:50.859433 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:37:50.859646 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 22:37:50.859706 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:37:50.859950 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:37:50.860033 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:37:50.860407 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:37:50.860502 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:37:50.860814 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:37:50.861029 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:37:50.864505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:37:50.864717 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:37:50.865007 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:37:50.865192 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:37:50.865250 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:37:50.865572 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:37:50.865649 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:37:50.865926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:37:50.866032 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:37:50.866328 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:37:50.866423 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:37:50.867271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:37:50.867403 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:37:50.867527 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:37:50.869541 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:37:50.869673 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:37:50.869749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:37:50.869981 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:37:50.870069 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 14 22:37:50.873604 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:37:50.875479 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:37:50.883327 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:37:50.885101 ignition[1089]: INFO : Ignition 2.21.0 Jul 14 22:37:50.885317 ignition[1089]: INFO : Stage: umount Jul 14 22:37:50.885530 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:37:50.885687 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:37:50.885754 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:37:50.885816 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:37:50.886568 ignition[1089]: INFO : umount: umount passed Jul 14 22:37:50.886711 ignition[1089]: INFO : Ignition finished successfully Jul 14 22:37:50.887415 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:37:50.887481 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:37:50.887727 systemd[1]: Stopped target network.target - Network. Jul 14 22:37:50.887838 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:37:50.887863 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 22:37:50.888014 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:37:50.888036 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:37:50.888178 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:37:50.888200 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:37:50.888352 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:37:50.888373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 22:37:50.888544 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:37:50.888567 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 22:37:50.888773 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 22:37:50.889066 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 22:37:50.890527 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:37:50.890595 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 22:37:50.891890 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 14 22:37:50.892035 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 22:37:50.892063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:37:50.893102 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 14 22:37:50.897306 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:37:50.897404 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 22:37:50.898055 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 14 22:37:50.898142 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 14 22:37:50.898299 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:37:50.898317 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:37:50.898956 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 14 22:37:50.899051 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:37:50.899076 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:37:50.899231 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 14 22:37:50.899253 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 22:37:50.899371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:37:50.899392 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:37:50.899554 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:37:50.899575 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 22:37:50.900944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:37:50.901482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 22:37:50.908201 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:37:50.908586 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:37:50.908980 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:37:50.909168 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 22:37:50.909627 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:37:50.909657 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 22:37:50.910013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:37:50.910030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:37:50.910259 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:37:50.910282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:37:50.910573 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:37:50.910597 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 22:37:50.910986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:37:50.911010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:37:50.912140 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 22:37:50.912388 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 14 22:37:50.912534 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:37:50.912849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:37:50.912873 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:37:50.913162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:37:50.913185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:37:50.922372 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:37:50.922440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 22:37:50.922838 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 22:37:50.923354 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 22:37:50.936306 systemd[1]: Switching root. 
Jul 14 22:37:50.960653 systemd-journald[244]: Journal stopped Jul 14 22:37:52.152077 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jul 14 22:37:52.152105 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:37:52.152113 kernel: SELinux: policy capability open_perms=1 Jul 14 22:37:52.152119 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:37:52.152125 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:37:52.152131 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:37:52.152137 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:37:52.152143 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:37:52.152149 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:37:52.152154 kernel: SELinux: policy capability userspace_initial_context=0 Jul 14 22:37:52.152160 systemd[1]: Successfully loaded SELinux policy in 51.292ms. Jul 14 22:37:52.152167 kernel: audit: type=1403 audit(1752532671.582:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:37:52.152175 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.647ms. Jul 14 22:37:52.152182 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 14 22:37:52.152189 systemd[1]: Detected virtualization vmware. Jul 14 22:37:52.152196 systemd[1]: Detected architecture x86-64. Jul 14 22:37:52.152204 systemd[1]: Detected first boot. Jul 14 22:37:52.152211 systemd[1]: Initializing machine ID from random generator. Jul 14 22:37:52.152307 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 14 22:37:52.152320 kernel: Guest personality initialized and is active Jul 14 22:37:52.152330 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 14 22:37:52.152337 kernel: Initialized host personality Jul 14 22:37:52.152344 zram_generator::config[1133]: No configuration found. Jul 14 22:37:52.152353 kernel: NET: Registered PF_VSOCK protocol family Jul 14 22:37:52.152363 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:37:52.152372 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:37:52.152379 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 14 22:37:52.152386 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 14 22:37:52.152397 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 22:37:52.152406 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 22:37:52.152415 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 22:37:52.152422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 22:37:52.152433 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 22:37:52.152440 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 22:37:52.152447 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jul 14 22:37:52.152453 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 22:37:52.152466 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 22:37:52.152477 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 22:37:52.152503 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 22:37:52.155218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:37:52.155232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:37:52.155240 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 22:37:52.155247 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 22:37:52.155254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 22:37:52.155261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:37:52.155269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 22:37:52.155276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:37:52.155283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:37:52.155290 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 22:37:52.155296 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 22:37:52.155303 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 22:37:52.155310 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 22:37:52.155317 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:37:52.155325 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:37:52.155332 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:37:52.155339 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:37:52.155346 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 22:37:52.155353 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 22:37:52.155361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 14 22:37:52.155368 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:37:52.155375 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:37:52.155382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:37:52.155388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 22:37:52.155395 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 22:37:52.155402 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 22:37:52.155409 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 22:37:52.155417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:52.155424 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 22:37:52.155431 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jul 14 22:37:52.155438 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 22:37:52.155445 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:37:52.155452 systemd[1]: Reached target machines.target - Containers. Jul 14 22:37:52.155459 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 22:37:52.155466 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 14 22:37:52.155474 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:37:52.155480 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:37:52.155494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:37:52.155501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:37:52.155508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:37:52.155515 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:37:52.155521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:37:52.155528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:37:52.155537 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 22:37:52.155544 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 22:37:52.155551 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 22:37:52.155558 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 22:37:52.155566 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 22:37:52.155573 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:37:52.155580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:37:52.155587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:37:52.155595 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 22:37:52.155602 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 14 22:37:52.155609 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:37:52.155615 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 22:37:52.155622 systemd[1]: Stopped verity-setup.service. Jul 14 22:37:52.155629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:52.155636 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 22:37:52.155643 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 22:37:52.155650 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 22:37:52.155659 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 22:37:52.155666 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jul 14 22:37:52.155673 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 22:37:52.155680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:37:52.155687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:37:52.155694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:37:52.155701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:37:52.155707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:37:52.155715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:37:52.155723 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 22:37:52.155730 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:37:52.155737 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 22:37:52.155744 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:37:52.155751 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:37:52.155757 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:37:52.155764 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 14 22:37:52.155771 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 22:37:52.155779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:37:52.155802 systemd-journald[1230]: Collecting audit messages is disabled. Jul 14 22:37:52.155823 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 22:37:52.155831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:37:52.155839 systemd-journald[1230]: Journal started Jul 14 22:37:52.155856 systemd-journald[1230]: Runtime Journal (/run/log/journal/dc2c9a6a658b489cb97bb720505f0969) is 4.8M, max 38.8M, 34M free. Jul 14 22:37:52.161509 kernel: fuse: init (API version 7.41) Jul 14 22:37:51.983047 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:37:51.996148 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 14 22:37:51.996457 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 22:37:52.162037 jq[1203]: true Jul 14 22:37:52.162538 jq[1246]: true Jul 14 22:37:52.164613 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 22:37:52.170554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:37:52.172498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 22:37:52.176518 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 22:37:52.176549 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:37:52.177645 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:37:52.182250 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:37:52.182541 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 14 22:37:52.182655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:37:52.182948 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 14 22:37:52.197612 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 22:37:52.208667 kernel: loop0: detected capacity change from 0 to 2960 Jul 14 22:37:52.209507 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 22:37:52.209763 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 22:37:52.212653 kernel: loop: module loaded Jul 14 22:37:52.214679 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 14 22:37:52.214953 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:37:52.217654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:37:52.218094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:37:52.233618 kernel: ACPI: bus type drm_connector registered Jul 14 22:37:52.235423 systemd-journald[1230]: Time spent on flushing to /var/log/journal/dc2c9a6a658b489cb97bb720505f0969 is 41.331ms for 1757 entries. Jul 14 22:37:52.235423 systemd-journald[1230]: System Journal (/var/log/journal/dc2c9a6a658b489cb97bb720505f0969) is 8M, max 584.8M, 576.8M free. Jul 14 22:37:52.323357 systemd-journald[1230]: Received client request to flush runtime journal. Jul 14 22:37:52.323397 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:37:52.323413 kernel: loop1: detected capacity change from 0 to 146488 Jul 14 22:37:52.234364 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:37:52.260292 ignition[1256]: Ignition 2.21.0 Jul 14 22:37:52.234543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:37:52.262505 ignition[1256]: deleting config from guestinfo properties Jul 14 22:37:52.237374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:37:52.280548 ignition[1256]: Successfully deleted config Jul 14 22:37:52.277825 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 14 22:37:52.284700 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 14 22:37:52.311252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:37:52.325732 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 22:37:52.327319 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 22:37:52.329562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:37:52.352626 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jul 14 22:37:52.352819 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jul 14 22:37:52.357206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 14 22:37:52.357511 kernel: loop2: detected capacity change from 0 to 221472 Jul 14 22:37:52.391510 kernel: loop3: detected capacity change from 0 to 114000 Jul 14 22:37:52.422511 kernel: loop4: detected capacity change from 0 to 2960 Jul 14 22:37:52.453500 kernel: loop5: detected capacity change from 0 to 146488 Jul 14 22:37:52.480519 kernel: loop6: detected capacity change from 0 to 221472 Jul 14 22:37:52.533506 kernel: loop7: detected capacity change from 0 to 114000 Jul 14 22:37:52.555871 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jul 14 22:37:52.556501 (sd-merge)[1303]: Merged extensions into '/usr'. Jul 14 22:37:52.563426 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 22:37:52.563494 systemd[1]: Reloading... Jul 14 22:37:52.621930 zram_generator::config[1326]: No configuration found. Jul 14 22:37:52.762023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:37:52.772283 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:37:52.819283 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:37:52.819613 systemd[1]: Reloading finished in 255 ms. Jul 14 22:37:52.837854 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 22:37:52.845043 systemd[1]: Starting ensure-sysext.service... Jul 14 22:37:52.848150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:37:52.853016 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 22:37:52.859002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:37:52.859694 systemd[1]: Reload requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Jul 14 22:37:52.859700 systemd[1]: Reloading... Jul 14 22:37:52.861288 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:37:52.867278 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 14 22:37:52.867615 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 14 22:37:52.867815 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:37:52.868012 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 22:37:52.868549 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:37:52.868757 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jul 14 22:37:52.868829 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jul 14 22:37:52.870559 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:37:52.870607 systemd-tmpfiles[1386]: Skipping /boot Jul 14 22:37:52.874529 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 14 22:37:52.874570 systemd-tmpfiles[1386]: Skipping /boot Jul 14 22:37:52.892767 systemd-udevd[1389]: Using default interface naming scheme 'v255'. Jul 14 22:37:52.909505 zram_generator::config[1415]: No configuration found. Jul 14 22:37:53.032080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:37:53.042836 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:37:53.049495 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 14 22:37:53.063593 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:37:53.072538 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:37:53.111903 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 14 22:37:53.112118 systemd[1]: Reloading finished in 252 ms. Jul 14 22:37:53.122362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:37:53.122826 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 22:37:53.130333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:37:53.157690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:53.159071 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 22:37:53.160497 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 22:37:53.162260 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:37:53.162915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:37:53.166699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:37:53.168539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:37:53.174821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:37:53.175224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:37:53.175298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 22:37:53.176889 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 22:37:53.178646 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:37:53.180681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:37:53.185538 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 22:37:53.185675 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:53.187951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:37:53.188100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jul 14 22:37:53.188439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:37:53.188572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:37:53.188894 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:37:53.188993 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:37:53.195271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:37:53.195530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:37:53.202176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 22:37:53.205548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:53.208343 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:37:53.217289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:37:53.221997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:37:53.225631 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:37:53.226561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:37:53.228926 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 22:37:53.229064 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 22:37:53.229140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:53.229756 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:37:53.230634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:37:53.231860 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 22:37:53.240981 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:37:53.241527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:37:53.244535 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 22:37:53.248346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:53.254844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:37:53.257784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:37:53.258057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:37:53.258328 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 14 22:37:53.258424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:37:53.258492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:37:53.263014 systemd[1]: Finished ensure-sysext.service. Jul 14 22:37:53.266653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:37:53.276961 augenrules[1568]: No rules Jul 14 22:37:53.277300 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:37:53.279690 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 22:37:53.280697 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:37:53.280944 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:37:53.281050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:37:53.285555 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 22:37:53.285904 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 22:37:53.286171 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:37:53.286287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:37:53.290112 (udev-worker)[1425]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 14 22:37:53.301659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:37:53.301792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:37:53.302706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:37:53.302821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:37:53.303640 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 14 22:37:53.303547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:37:53.303591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:37:53.305882 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:37:53.305995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:37:53.310524 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 22:37:53.310775 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:37:53.312538 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:37:53.312866 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:37:53.315917 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 22:37:53.323349 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 22:37:53.324257 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 22:37:53.337269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:37:53.356910 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
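audit-rules.service invokes augenrules, which assembles /etc/audit/audit.rules from the fragments in /etc/audit/rules.d/; the "No rules" message simply means that directory held no rule files at this point, so the unit finished with an empty rule set. Whether the assembled rules are current can be checked later with, for example:

    augenrules --check   # reports whether audit.rules is older than the rules.d fragments

This is only a status query; loading a new rule set is done by restarting audit-rules.service, as the core user does further down in this log.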
Jul 14 22:37:53.418451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:37:53.428647 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 22:37:53.428861 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:37:53.430471 systemd-resolved[1519]: Positive Trust Anchors: Jul 14 22:37:53.430504 systemd-resolved[1519]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:37:53.430553 systemd-resolved[1519]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:37:53.435929 systemd-networkd[1517]: lo: Link UP Jul 14 22:37:53.435933 systemd-networkd[1517]: lo: Gained carrier Jul 14 22:37:53.436782 systemd-resolved[1519]: Defaulting to hostname 'linux'. Jul 14 22:37:53.436795 systemd-networkd[1517]: Enumeration completed Jul 14 22:37:53.436836 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:37:53.437956 systemd-networkd[1517]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 14 22:37:53.439896 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 22:37:53.440022 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 22:37:53.440013 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 14 22:37:53.441047 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 22:37:53.441204 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:37:53.441338 systemd[1]: Reached target network.target - Network. Jul 14 22:37:53.441434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:37:53.441571 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:37:53.441719 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 22:37:53.441852 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:37:53.442051 systemd-networkd[1517]: ens192: Link UP Jul 14 22:37:53.442146 systemd-networkd[1517]: ens192: Gained carrier Jul 14 22:37:53.442334 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 14 22:37:53.442544 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:37:53.442699 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:37:53.442816 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:37:53.442929 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:37:53.442950 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:37:53.443047 systemd[1]: Reached target timers.target - Timer Units. 
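ens192 is configured by the /etc/systemd/network/00-vmware.network file named in the entry above, which is why the interface immediately gains carrier and, later, an IPv6 link-local address. The exact contents of the shipped file are not visible in this log; a minimal equivalent that would produce the same DHCP-style behaviour is assumed to look roughly like:

    # assumed shape of a DHCP .network file; not the literal Flatcar-shipped contents
    [Match]
    Name=ens192

    [Network]
    DHCP=yes

systemd-networkd applies whichever matching .network file sorts first, which is why the file carries the 00- prefix.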
Jul 14 22:37:53.443359 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:37:53.444386 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:37:53.445999 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 14 22:37:53.446193 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 14 22:37:53.446320 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 14 22:37:53.446884 systemd-timesyncd[1564]: Network configuration changed, trying to establish connection. Jul 14 22:37:53.448759 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:37:53.449507 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 14 22:37:53.450024 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:37:53.450601 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:37:53.450703 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:37:53.450825 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:37:53.450838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:37:53.451450 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 22:37:53.453244 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 22:37:53.454603 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 22:37:53.457249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 22:37:53.460299 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:37:53.460416 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:37:53.466048 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 14 22:37:53.466713 jq[1615]: false Jul 14 22:37:53.469293 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:37:53.471594 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 22:37:53.471961 extend-filesystems[1616]: Found /dev/sda6 Jul 14 22:37:53.472912 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 22:37:53.474614 extend-filesystems[1616]: Found /dev/sda9 Jul 14 22:37:53.475751 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:37:53.475886 extend-filesystems[1616]: Checking size of /dev/sda9 Jul 14 22:37:53.481976 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:37:53.482582 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:37:53.483051 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:37:53.484783 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Refreshing passwd entry cache Jul 14 22:37:53.484973 oslogin_cache_refresh[1617]: Refreshing passwd entry cache Jul 14 22:37:53.489621 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 14 22:37:53.490501 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Failure getting users, quitting Jul 14 22:37:53.490501 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 14 22:37:53.490501 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Refreshing group entry cache Jul 14 22:37:53.490287 oslogin_cache_refresh[1617]: Failure getting users, quitting Jul 14 22:37:53.490299 oslogin_cache_refresh[1617]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 14 22:37:53.490327 oslogin_cache_refresh[1617]: Refreshing group entry cache Jul 14 22:37:53.490710 extend-filesystems[1616]: Old size kept for /dev/sda9 Jul 14 22:37:53.493981 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:37:53.498365 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 14 22:37:53.503513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 14 22:37:53.505916 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 22:37:53.506158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:37:53.506272 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:37:53.506408 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:37:53.506530 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:37:53.508869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:37:53.508985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 22:37:53.524129 jq[1635]: true Jul 14 22:37:53.524890 tar[1646]: linux-amd64/helm Jul 14 22:37:53.532455 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:37:53.532831 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:37:53.541538 update_engine[1630]: I20250714 22:37:53.540753 1630 main.cc:92] Flatcar Update Engine starting Jul 14 22:37:53.543980 dbus-daemon[1613]: [system] SELinux support is enabled Jul 14 22:37:53.544060 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 22:37:53.546221 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:37:53.546239 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:37:53.546554 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:37:53.546564 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:37:53.551136 jq[1657]: true Jul 14 22:37:53.556677 update_engine[1630]: I20250714 22:37:53.556648 1630 update_check_scheduler.cc:74] Next update check in 4m53s Jul 14 22:37:53.556906 (ntainerd)[1660]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:37:53.565625 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
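update_engine starts idle and schedules its first update check a few minutes after boot (4m53s here); locksmithd, started shortly afterwards with strategy "reboot", is the component that coordinates the reboot once an update has been staged. Both can be queried interactively; flag spelling can differ between releases, so treat these as illustrative:

    update_engine_client -status   # current operation and staged version, if any
    locksmithctl status            # reboot strategy and outstanding holds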
Jul 14 22:37:53.565869 systemd[1]: Started update-engine.service - Update Engine. Jul 14 22:37:53.570815 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jul 14 22:37:53.603600 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:37:53.642329 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jul 14 22:37:53.650217 unknown[1663]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 14 22:37:53.655263 unknown[1663]: Core dump limit set to -1 Jul 14 22:37:53.668354 bash[1684]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:37:53.670361 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:37:53.671190 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:37:53.672324 systemd-logind[1629]: Watching system buttons on /dev/input/event2 (Power Button) Jul 14 22:37:53.672350 systemd-logind[1629]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:37:53.675439 systemd-logind[1629]: New seat seat0. Jul 14 22:37:53.677913 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:37:53.780170 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:37:53.811929 containerd[1660]: time="2025-07-14T22:37:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 14 22:37:53.815499 containerd[1660]: time="2025-07-14T22:37:53.815474044Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 14 22:37:53.836855 containerd[1660]: time="2025-07-14T22:37:53.836824506Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.144µs" Jul 14 22:37:53.836855 containerd[1660]: time="2025-07-14T22:37:53.836846159Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 14 22:37:53.836855 containerd[1660]: time="2025-07-14T22:37:53.836857378Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 14 22:37:53.836960 containerd[1660]: time="2025-07-14T22:37:53.836947075Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 14 22:37:53.837002 containerd[1660]: time="2025-07-14T22:37:53.836959294Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 14 22:37:53.837002 containerd[1660]: time="2025-07-14T22:37:53.836985060Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837028 containerd[1660]: time="2025-07-14T22:37:53.837020121Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837043 containerd[1660]: time="2025-07-14T22:37:53.837027579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837178 containerd[1660]: time="2025-07-14T22:37:53.837162594Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a 
btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837178 containerd[1660]: time="2025-07-14T22:37:53.837174186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837215 containerd[1660]: time="2025-07-14T22:37:53.837181176Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837215 containerd[1660]: time="2025-07-14T22:37:53.837185793Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837242 containerd[1660]: time="2025-07-14T22:37:53.837225255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837347 containerd[1660]: time="2025-07-14T22:37:53.837335828Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837368 containerd[1660]: time="2025-07-14T22:37:53.837354655Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 14 22:37:53.837368 containerd[1660]: time="2025-07-14T22:37:53.837361552Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 14 22:37:53.837398 containerd[1660]: time="2025-07-14T22:37:53.837380259Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 14 22:37:53.840537 containerd[1660]: time="2025-07-14T22:37:53.840523695Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 14 22:37:53.840575 containerd[1660]: time="2025-07-14T22:37:53.840559423Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844722508Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844746101Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844754761Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844761702Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844768734Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844774988Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844781950Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844788862Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 14 
22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844794640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844803084Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844808530Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 14 22:37:53.844860 containerd[1660]: time="2025-07-14T22:37:53.844815507Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844869831Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844881239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844890868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844899804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844906714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844912377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844918159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844923422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844929083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844935274Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844943141Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844977875Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.844985996Z" level=info msg="Start snapshots syncer" Jul 14 22:37:53.845276 containerd[1660]: time="2025-07-14T22:37:53.845001699Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 14 22:37:53.846293 containerd[1660]: time="2025-07-14T22:37:53.845149731Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 14 22:37:53.846293 containerd[1660]: time="2025-07-14T22:37:53.845183857Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845218289Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845266436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845279148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845288831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845295466Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845301712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845307906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845313735Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845329171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: 
time="2025-07-14T22:37:53.845337311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845343318Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845357384Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845364892Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 14 22:37:53.846369 containerd[1660]: time="2025-07-14T22:37:53.845369525Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845374952Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845379074Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845384207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845389878Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845399347Z" level=info msg="runtime interface created" Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845402751Z" level=info msg="created NRI interface" Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845406963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845413099Z" level=info msg="Connect containerd service" Jul 14 22:37:53.846564 containerd[1660]: time="2025-07-14T22:37:53.845426871Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:37:53.847906 containerd[1660]: time="2025-07-14T22:37:53.847823571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:37:53.931861 sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:37:53.953425 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:37:53.956192 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:37:53.971325 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:37:53.971639 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:37:53.975741 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:37:53.979105 tar[1646]: linux-amd64/LICENSE Jul 14 22:37:53.979105 tar[1646]: linux-amd64/README.md Jul 14 22:37:53.989181 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 14 22:37:53.991034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:37:53.993674 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:37:53.995832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:37:53.996032 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:37:54.016945 containerd[1660]: time="2025-07-14T22:37:54.016891180Z" level=info msg="Start subscribing containerd event" Jul 14 22:37:54.016945 containerd[1660]: time="2025-07-14T22:37:54.016922655Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:37:54.017036 containerd[1660]: time="2025-07-14T22:37:54.016965229Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:37:54.017036 containerd[1660]: time="2025-07-14T22:37:54.016923804Z" level=info msg="Start recovering state" Jul 14 22:37:54.017064 containerd[1660]: time="2025-07-14T22:37:54.017036892Z" level=info msg="Start event monitor" Jul 14 22:37:54.017064 containerd[1660]: time="2025-07-14T22:37:54.017045886Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:37:54.017064 containerd[1660]: time="2025-07-14T22:37:54.017050922Z" level=info msg="Start streaming server" Jul 14 22:37:54.017064 containerd[1660]: time="2025-07-14T22:37:54.017056070Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 14 22:37:54.017064 containerd[1660]: time="2025-07-14T22:37:54.017060040Z" level=info msg="runtime interface starting up..." Jul 14 22:37:54.017064 containerd[1660]: time="2025-07-14T22:37:54.017063082Z" level=info msg="starting plugins..." Jul 14 22:37:54.017786 containerd[1660]: time="2025-07-14T22:37:54.017070338Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 14 22:37:54.017786 containerd[1660]: time="2025-07-14T22:37:54.017150625Z" level=info msg="containerd successfully booted in 0.205420s" Jul 14 22:37:54.017210 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:37:55.153664 systemd-networkd[1517]: ens192: Gained IPv6LL Jul 14 22:37:55.153974 systemd-timesyncd[1564]: Network configuration changed, trying to establish connection. Jul 14 22:37:55.155443 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:37:55.156595 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:37:55.158051 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 14 22:37:55.161380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:55.166270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:37:55.196412 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:37:55.202882 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:37:55.203043 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 14 22:37:55.203415 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:37:55.496761 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Failure getting groups, quitting Jul 14 22:37:55.496761 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jul 14 22:37:55.496710 oslogin_cache_refresh[1617]: Failure getting groups, quitting Jul 14 22:37:55.496725 oslogin_cache_refresh[1617]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 14 22:37:55.497587 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 14 22:37:55.497777 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 14 22:37:56.851900 systemd-timesyncd[1564]: Network configuration changed, trying to establish connection. Jul 14 22:37:59.365201 login[1778]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 14 22:37:59.383188 login[1774]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 14 22:37:59.388418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:59.391604 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:37:59.394429 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:37:59.395565 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:37:59.396678 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:37:59.399361 systemd-logind[1629]: New session 2 of user core. Jul 14 22:37:59.419357 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:37:59.420802 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:37:59.434558 (systemd)[1819]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:37:59.436550 systemd-logind[1629]: New session c1 of user core. Jul 14 22:37:59.550408 systemd[1819]: Queued start job for default target default.target. Jul 14 22:37:59.568420 systemd[1819]: Created slice app.slice - User Application Slice. Jul 14 22:37:59.568553 systemd[1819]: Reached target paths.target - Paths. Jul 14 22:37:59.568584 systemd[1819]: Reached target timers.target - Timers. Jul 14 22:37:59.569276 systemd[1819]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:37:59.576066 systemd[1819]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:37:59.576102 systemd[1819]: Reached target sockets.target - Sockets. Jul 14 22:37:59.576132 systemd[1819]: Reached target basic.target - Basic System. Jul 14 22:37:59.576156 systemd[1819]: Reached target default.target - Main User Target. Jul 14 22:37:59.576174 systemd[1819]: Startup finished in 135ms. Jul 14 22:37:59.576210 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:37:59.586582 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:37:59.586930 systemd[1]: Startup finished in 2.729s (kernel) + 10.990s (initrd) + 8.054s (userspace) = 21.774s. Jul 14 22:38:00.367247 login[1778]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 14 22:38:00.371780 systemd-logind[1629]: New session 1 of user core. Jul 14 22:38:00.381651 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 14 22:38:00.611383 kubelet[1815]: E0714 22:38:00.611339 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:38:00.612854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:38:00.613074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:38:00.613359 systemd[1]: kubelet.service: Consumed 739ms CPU time, 264.2M memory peak. Jul 14 22:38:10.863538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:38:10.865075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:38:11.318015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:38:11.322689 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:38:11.361529 kubelet[1865]: E0714 22:38:11.361478 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:38:11.363843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:38:11.363931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:38:11.364378 systemd[1]: kubelet.service: Consumed 105ms CPU time, 108.6M memory peak. Jul 14 22:38:21.427347 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:38:21.428789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:38:21.770024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:38:21.772557 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:38:21.816445 kubelet[1880]: E0714 22:38:21.816407 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:38:21.817760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:38:21.817846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:38:21.818300 systemd[1]: kubelet.service: Consumed 104ms CPU time, 109.1M memory peak. Jul 14 22:38:23.743766 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:38:23.744740 systemd[1]: Started sshd@0-139.178.70.108:22-139.178.89.65:56314.service - OpenSSH per-connection server daemon (139.178.89.65:56314). Jul 14 22:38:23.798312 sshd[1888]: Accepted publickey for core from 139.178.89.65 port 56314 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:23.799244 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:23.802862 systemd-logind[1629]: New session 3 of user core. 
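The kubelet failure above, and the restart loop that follows (restart counters 1 through 7 over the next several minutes), is the normal state of a node that has not yet joined a cluster: the unit is configured to read /var/lib/kubelet/config.yaml, and that file is ordinarily created by kubeadm during init or join. Until then systemd simply retries at the unit's configured restart interval (roughly every ten seconds in this log). Once written, the file starts with a KubeletConfiguration document, roughly:

    # /var/lib/kubelet/config.yaml as written by kubeadm (abridged sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ...

so the error disappears on the first restart after kubeadm has run.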
Jul 14 22:38:23.808610 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:38:23.861928 systemd[1]: Started sshd@1-139.178.70.108:22-139.178.89.65:56324.service - OpenSSH per-connection server daemon (139.178.89.65:56324). Jul 14 22:38:23.902210 sshd[1894]: Accepted publickey for core from 139.178.89.65 port 56324 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:23.902995 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:23.906684 systemd-logind[1629]: New session 4 of user core. Jul 14 22:38:23.915648 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:38:23.963801 sshd[1897]: Connection closed by 139.178.89.65 port 56324 Jul 14 22:38:23.964076 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:23.969361 systemd[1]: sshd@1-139.178.70.108:22-139.178.89.65:56324.service: Deactivated successfully. Jul 14 22:38:23.970217 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:38:23.970717 systemd-logind[1629]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:38:23.971825 systemd[1]: Started sshd@2-139.178.70.108:22-139.178.89.65:56332.service - OpenSSH per-connection server daemon (139.178.89.65:56332). Jul 14 22:38:23.972807 systemd-logind[1629]: Removed session 4. Jul 14 22:38:24.013734 sshd[1903]: Accepted publickey for core from 139.178.89.65 port 56332 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:24.014341 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:24.016917 systemd-logind[1629]: New session 5 of user core. Jul 14 22:38:24.024624 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:38:24.070103 sshd[1906]: Connection closed by 139.178.89.65 port 56332 Jul 14 22:38:24.070375 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:24.075646 systemd[1]: sshd@2-139.178.70.108:22-139.178.89.65:56332.service: Deactivated successfully. Jul 14 22:38:24.076607 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:38:24.077087 systemd-logind[1629]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:38:24.078307 systemd[1]: Started sshd@3-139.178.70.108:22-139.178.89.65:56342.service - OpenSSH per-connection server daemon (139.178.89.65:56342). Jul 14 22:38:24.079990 systemd-logind[1629]: Removed session 5. Jul 14 22:38:24.114529 sshd[1912]: Accepted publickey for core from 139.178.89.65 port 56342 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:24.115262 sshd-session[1912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:24.118122 systemd-logind[1629]: New session 6 of user core. Jul 14 22:38:24.123568 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:38:24.172261 sshd[1915]: Connection closed by 139.178.89.65 port 56342 Jul 14 22:38:24.172997 sshd-session[1912]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:24.177369 systemd[1]: sshd@3-139.178.70.108:22-139.178.89.65:56342.service: Deactivated successfully. Jul 14 22:38:24.178460 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:38:24.179048 systemd-logind[1629]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:38:24.180995 systemd[1]: Started sshd@4-139.178.70.108:22-139.178.89.65:56358.service - OpenSSH per-connection server daemon (139.178.89.65:56358). 
Jul 14 22:38:24.181806 systemd-logind[1629]: Removed session 6. Jul 14 22:38:24.219812 sshd[1921]: Accepted publickey for core from 139.178.89.65 port 56358 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:24.220498 sshd-session[1921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:24.223004 systemd-logind[1629]: New session 7 of user core. Jul 14 22:38:24.238889 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:38:24.299736 sudo[1925]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:38:24.299915 sudo[1925]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:38:24.320168 sudo[1925]: pam_unix(sudo:session): session closed for user root Jul 14 22:38:24.320990 sshd[1924]: Connection closed by 139.178.89.65 port 56358 Jul 14 22:38:24.321905 sshd-session[1921]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:24.328078 systemd[1]: sshd@4-139.178.70.108:22-139.178.89.65:56358.service: Deactivated successfully. Jul 14 22:38:24.329181 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:38:24.330427 systemd-logind[1629]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:38:24.331557 systemd[1]: Started sshd@5-139.178.70.108:22-139.178.89.65:56366.service - OpenSSH per-connection server daemon (139.178.89.65:56366). Jul 14 22:38:24.332387 systemd-logind[1629]: Removed session 7. Jul 14 22:38:24.373514 sshd[1931]: Accepted publickey for core from 139.178.89.65 port 56366 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:24.374492 sshd-session[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:24.377302 systemd-logind[1629]: New session 8 of user core. Jul 14 22:38:24.391689 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 22:38:24.440690 sudo[1936]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:38:24.440861 sudo[1936]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:38:24.443921 sudo[1936]: pam_unix(sudo:session): session closed for user root Jul 14 22:38:24.447792 sudo[1935]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 14 22:38:24.447957 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:38:24.454662 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 22:38:24.480522 augenrules[1958]: No rules Jul 14 22:38:24.480853 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:38:24.481042 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 22:38:24.482012 sudo[1935]: pam_unix(sudo:session): session closed for user root Jul 14 22:38:24.482800 sshd[1934]: Connection closed by 139.178.89.65 port 56366 Jul 14 22:38:24.483053 sshd-session[1931]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:24.492872 systemd[1]: sshd@5-139.178.70.108:22-139.178.89.65:56366.service: Deactivated successfully. Jul 14 22:38:24.494011 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:38:24.494548 systemd-logind[1629]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:38:24.495935 systemd[1]: Started sshd@6-139.178.70.108:22-139.178.89.65:56368.service - OpenSSH per-connection server daemon (139.178.89.65:56368). 
Jul 14 22:38:24.496973 systemd-logind[1629]: Removed session 8. Jul 14 22:38:24.530922 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 56368 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:38:24.531808 sshd-session[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:24.534520 systemd-logind[1629]: New session 9 of user core. Jul 14 22:38:24.542925 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 22:38:24.590667 sudo[1971]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:38:24.590825 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:38:24.882425 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 22:38:24.893747 (dockerd)[1988]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:38:25.106237 dockerd[1988]: time="2025-07-14T22:38:25.106194710Z" level=info msg="Starting up" Jul 14 22:38:25.106630 dockerd[1988]: time="2025-07-14T22:38:25.106616860Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 14 22:38:25.113029 dockerd[1988]: time="2025-07-14T22:38:25.112999212Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 14 22:38:25.121737 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport560789618-merged.mount: Deactivated successfully. Jul 14 22:38:25.137201 dockerd[1988]: time="2025-07-14T22:38:25.137010100Z" level=info msg="Loading containers: start." Jul 14 22:38:25.144532 kernel: Initializing XFRM netlink socket Jul 14 22:38:25.269998 systemd-timesyncd[1564]: Network configuration changed, trying to establish connection. Jul 14 22:38:25.303388 systemd-networkd[1517]: docker0: Link UP Jul 14 22:38:25.304641 dockerd[1988]: time="2025-07-14T22:38:25.304618520Z" level=info msg="Loading containers: done." Jul 14 22:38:25.315010 dockerd[1988]: time="2025-07-14T22:38:25.314968046Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:38:25.315114 dockerd[1988]: time="2025-07-14T22:38:25.315040495Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 14 22:38:25.315114 dockerd[1988]: time="2025-07-14T22:38:25.315103172Z" level=info msg="Initializing buildkit" Jul 14 22:38:25.326218 dockerd[1988]: time="2025-07-14T22:38:25.326140199Z" level=info msg="Completed buildkit initialization" Jul 14 22:38:25.330918 dockerd[1988]: time="2025-07-14T22:38:25.330877689Z" level=info msg="Daemon has completed initialization" Jul 14 22:38:25.331159 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:38:25.331546 dockerd[1988]: time="2025-07-14T22:38:25.331508918Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:38:26.119476 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4264527573-merged.mount: Deactivated successfully. Jul 14 22:39:57.075418 systemd-resolved[1519]: Clock change detected. Flushing caches. Jul 14 22:39:57.075638 systemd-timesyncd[1564]: Contacted time server 208.113.130.146:123 (2.flatcar.pool.ntp.org). 
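Two entries in this span deserve a note. The overlay2 warning means Docker detected CONFIG_OVERLAY_FS_REDIRECT_DIR in the kernel and therefore disables its native overlay diff path, falling back to the slower emulated diff used when building images; normal container runtime behaviour is unaffected. The active storage driver can be confirmed with:

    docker info --format '{{.Driver}}'   # expected output: overlay2

The abrupt jump from 22:38 to 22:39:57 immediately afterwards is not missing log data: systemd-timesyncd completed its first NTP synchronization (see the "Initial clock synchronization" entry just below), the system clock was stepped forward, and systemd-resolved flushed its caches in response.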
Jul 14 22:39:57.075667 systemd-timesyncd[1564]: Initial clock synchronization to Mon 2025-07-14 22:39:57.075289 UTC. Jul 14 22:39:58.502039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:39:58.503082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:39:58.855024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:39:58.863441 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:39:58.904773 kubelet[2205]: E0714 22:39:58.904733 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:39:58.906353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:39:58.906571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:39:58.906912 systemd[1]: kubelet.service: Consumed 100ms CPU time, 108.6M memory peak. Jul 14 22:40:05.401188 update_engine[1630]: I20250714 22:40:05.401013 1630 update_attempter.cc:509] Updating boot flags... Jul 14 22:40:07.822629 containerd[1660]: time="2025-07-14T22:40:07.822596197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 14 22:40:08.487445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315496164.mount: Deactivated successfully. Jul 14 22:40:09.002350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:40:09.004550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:40:09.096335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:40:09.098431 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:40:09.127250 kubelet[2293]: E0714 22:40:09.127210 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:09.128531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:09.128904 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:09.129223 systemd[1]: kubelet.service: Consumed 98ms CPU time, 110.2M memory peak. 
Jul 14 22:40:09.524859 containerd[1660]: time="2025-07-14T22:40:09.524435190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:09.525307 containerd[1660]: time="2025-07-14T22:40:09.525286505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" Jul 14 22:40:09.525384 containerd[1660]: time="2025-07-14T22:40:09.525332777Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:09.527530 containerd[1660]: time="2025-07-14T22:40:09.527446867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:09.528471 containerd[1660]: time="2025-07-14T22:40:09.528364343Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.705721087s" Jul 14 22:40:09.528471 containerd[1660]: time="2025-07-14T22:40:09.528388156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 14 22:40:09.528849 containerd[1660]: time="2025-07-14T22:40:09.528808714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 14 22:40:10.819979 containerd[1660]: time="2025-07-14T22:40:10.819325175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:10.824547 containerd[1660]: time="2025-07-14T22:40:10.824530519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" Jul 14 22:40:10.832252 containerd[1660]: time="2025-07-14T22:40:10.832220263Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:10.841176 containerd[1660]: time="2025-07-14T22:40:10.841144273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:10.842007 containerd[1660]: time="2025-07-14T22:40:10.841984465Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.313060086s" Jul 14 22:40:10.842053 containerd[1660]: time="2025-07-14T22:40:10.842007245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 14 22:40:10.842364 
containerd[1660]: time="2025-07-14T22:40:10.842340152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 14 22:40:19.252158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 14 22:40:19.253818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:40:19.712999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:40:19.721522 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:40:19.746865 kubelet[2312]: E0714 22:40:19.746829 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:19.748249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:19.748393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:19.748768 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110.1M memory peak. Jul 14 22:40:27.381099 containerd[1660]: time="2025-07-14T22:40:27.380630777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:27.381538 containerd[1660]: time="2025-07-14T22:40:27.381528269Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" Jul 14 22:40:27.381975 containerd[1660]: time="2025-07-14T22:40:27.381964631Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:27.383739 containerd[1660]: time="2025-07-14T22:40:27.383726873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:27.384138 containerd[1660]: time="2025-07-14T22:40:27.384057477Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 16.541697925s" Jul 14 22:40:27.384501 containerd[1660]: time="2025-07-14T22:40:27.384491350Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 14 22:40:27.384934 containerd[1660]: time="2025-07-14T22:40:27.384886345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 14 22:40:29.752165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 14 22:40:29.753901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:40:30.179749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
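The "Pulled image … in …s" entries above report both a byte count and a wall-clock duration, so the effective pull rate can be read straight off the log. A small arithmetic check using the values quoted verbatim above (the first two pulls finish in under two seconds each, while the kube-scheduler pull stretches to ~16.5 s during the kubelet restart churn):

```python
# Effective pull throughput, using the sizes and durations quoted in the
# containerd "Pulled image" messages above.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.31.8":          (27_957_787, 1.705721087),
    "registry.k8s.io/kube-controller-manager:v1.31.8": (26_202_149, 1.313060086),
    "registry.k8s.io/kube-scheduler:v1.31.8":          (20_268_777, 16.541697925),
}

for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s over {seconds:.2f}s")
```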
Jul 14 22:40:30.182423 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:40:30.210759 kubelet[2330]: E0714 22:40:30.210723 2330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:30.212197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:30.212364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:30.212760 systemd[1]: kubelet.service: Consumed 102ms CPU time, 110.3M memory peak. Jul 14 22:40:39.412548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416298027.mount: Deactivated successfully. Jul 14 22:40:39.740939 containerd[1660]: time="2025-07-14T22:40:39.740436948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:39.742149 containerd[1660]: time="2025-07-14T22:40:39.742134483Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" Jul 14 22:40:39.743146 containerd[1660]: time="2025-07-14T22:40:39.743129607Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:39.744267 containerd[1660]: time="2025-07-14T22:40:39.744254889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:39.744648 containerd[1660]: time="2025-07-14T22:40:39.744625710Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 12.359640069s" Jul 14 22:40:39.744681 containerd[1660]: time="2025-07-14T22:40:39.744649849Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 14 22:40:39.745095 containerd[1660]: time="2025-07-14T22:40:39.745037451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:40:40.252380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 14 22:40:40.254273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:40:40.614285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:40:40.618539 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:40:40.660547 kubelet[2353]: E0714 22:40:40.660433 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:40.662134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:40.662229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:40.662668 systemd[1]: kubelet.service: Consumed 114ms CPU time, 108.4M memory peak. Jul 14 22:40:41.103378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988646016.mount: Deactivated successfully. Jul 14 22:40:42.397597 containerd[1660]: time="2025-07-14T22:40:42.397560337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:42.415113 containerd[1660]: time="2025-07-14T22:40:42.415072548Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 14 22:40:42.426590 containerd[1660]: time="2025-07-14T22:40:42.426559406Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:42.440207 containerd[1660]: time="2025-07-14T22:40:42.439939123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:42.440446 containerd[1660]: time="2025-07-14T22:40:42.440427040Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.695370264s" Jul 14 22:40:42.440480 containerd[1660]: time="2025-07-14T22:40:42.440448112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:40:42.441171 containerd[1660]: time="2025-07-14T22:40:42.440685880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:40:44.322825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428184816.mount: Deactivated successfully. 
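The var-lib-containerd-tmpmounts-containerd\x2dmount… units being deactivated above appear to be the transient mount units containerd creates under /var/lib/containerd/tmpmounts while unpacking image layers; the \x2d sequences are systemd's escaping of a literal '-' inside a path component (the same decoding `systemd-escape --unescape --path` performs). A small sketch of that decoding:

```python
import re


def mount_unit_to_path(unit: str) -> str:
    """Decode a systemd mount unit name back to its filesystem path.

    In mount unit names '-' separates path components, and a literal '-'
    inside a component is escaped as \\x2d (general rule: \\xNN hex escapes).
    """
    name = unit.removesuffix(".mount")
    components = [
        re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)
        for part in name.split("-")
    ]
    return "/" + "/".join(components)


# The unit logged above maps back to containerd's temporary unpack mount point:
print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3428184816.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount3428184816
```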
Jul 14 22:40:44.394209 containerd[1660]: time="2025-07-14T22:40:44.394099264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:40:44.405475 containerd[1660]: time="2025-07-14T22:40:44.405450814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:40:44.416680 containerd[1660]: time="2025-07-14T22:40:44.416635455Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:40:44.431910 containerd[1660]: time="2025-07-14T22:40:44.431871117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:40:44.432441 containerd[1660]: time="2025-07-14T22:40:44.432325499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.991223352s" Jul 14 22:40:44.432441 containerd[1660]: time="2025-07-14T22:40:44.432349803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:40:44.432678 containerd[1660]: time="2025-07-14T22:40:44.432635084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:40:45.501931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149868758.mount: Deactivated successfully. 
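Unlike the other images, the pause:3.10 ImageCreate events above carry an extra io.cri-containerd.pinned label; containerd marks the sandbox image as pinned so that image garbage collection leaves it in place. A throwaway filter over messages in the shape logged above (the journal's outer quoting and escaping is dropped here for readability) picks such images out:

```python
# Minimal filter over ImageCreate messages (pasted as plain text) that flags
# which images carry containerd's "pinned" label.
events = [
    'ImageCreate event name:"registry.k8s.io/coredns/coredns:v1.11.3" '
    'labels:{key:"io.cri-containerd.image" value:"managed"}',
    'ImageCreate event name:"registry.k8s.io/pause:3.10" '
    'labels:{key:"io.cri-containerd.image" value:"managed"} '
    'labels:{key:"io.cri-containerd.pinned" value:"pinned"}',
]

for event in events:
    name = event.split('"')[1]                       # text between the first pair of quotes
    pinned = 'key:"io.cri-containerd.pinned"' in event
    print(f"{name}: pinned={pinned}")
```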
Jul 14 22:40:50.022884 containerd[1660]: time="2025-07-14T22:40:50.022705062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:50.023724 containerd[1660]: time="2025-07-14T22:40:50.023495258Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 14 22:40:50.024223 containerd[1660]: time="2025-07-14T22:40:50.024203578Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:50.026109 containerd[1660]: time="2025-07-14T22:40:50.026089345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:50.026897 containerd[1660]: time="2025-07-14T22:40:50.026877826Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.594225162s" Jul 14 22:40:50.026897 containerd[1660]: time="2025-07-14T22:40:50.026897040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 22:40:50.331498 systemd[1]: Started sshd@7-139.178.70.108:22-94.102.49.186:58570.service - OpenSSH per-connection server daemon (94.102.49.186:58570). Jul 14 22:40:50.548273 sshd[2480]: Connection closed by 94.102.49.186 port 58570 Jul 14 22:40:50.549043 systemd[1]: sshd@7-139.178.70.108:22-94.102.49.186:58570.service: Deactivated successfully. Jul 14 22:40:50.752092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jul 14 22:40:50.753462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:40:51.200999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:40:51.203869 (kubelet)[2492]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:40:51.440491 kubelet[2492]: E0714 22:40:51.440452 2492 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:51.441917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:51.442124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:51.442538 systemd[1]: kubelet.service: Consumed 124ms CPU time, 109.1M memory peak. Jul 14 22:40:55.524253 containerd[1660]: time="2025-07-14T22:40:55.524202547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 14 22:40:56.003470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142963496.mount: Deactivated successfully. 
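The sshd entries above show the "OpenSSH per-connection server daemon" pattern: each inbound connection gets its own socket-activated sshd@… instance whose name encodes a connection counter plus the local and remote endpoints, and the probe from 94.102.49.186 connects and is closed almost immediately. A small parser for that instance name; the field layout is inferred from the string itself, not from documentation:

```python
def parse_sshd_instance(unit: str) -> dict:
    """Split an sshd per-connection unit name such as
    'sshd@7-139.178.70.108:22-94.102.49.186:58570.service' into its parts.
    Layout inferred from the log: <counter>-<local addr:port>-<remote addr:port>.
    """
    instance = unit.removeprefix("sshd@").removesuffix(".service")
    counter, local, remote = instance.split("-")
    return {"counter": int(counter), "local": local, "remote": remote}


print(parse_sshd_instance("sshd@7-139.178.70.108:22-94.102.49.186:58570.service"))
# {'counter': 7, 'local': '139.178.70.108:22', 'remote': '94.102.49.186:58570'}
```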
Jul 14 22:40:56.799259 containerd[1660]: time="2025-07-14T22:40:56.799206052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:56.799956 containerd[1660]: time="2025-07-14T22:40:56.799938183Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=27930739" Jul 14 22:40:56.800368 containerd[1660]: time="2025-07-14T22:40:56.800348498Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:56.802023 containerd[1660]: time="2025-07-14T22:40:56.801998073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:56.802381 containerd[1660]: time="2025-07-14T22:40:56.802307888Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.278080666s" Jul 14 22:40:56.802381 containerd[1660]: time="2025-07-14T22:40:56.802326607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 14 22:40:56.803163 containerd[1660]: time="2025-07-14T22:40:56.803145820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 22:40:58.238172 containerd[1660]: time="2025-07-14T22:40:58.237379109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:58.244612 containerd[1660]: time="2025-07-14T22:40:58.244587604Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 14 22:40:58.254293 containerd[1660]: time="2025-07-14T22:40:58.254258203Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:58.261993 containerd[1660]: time="2025-07-14T22:40:58.261958638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:58.262727 containerd[1660]: time="2025-07-14T22:40:58.262705195Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.459539619s" Jul 14 22:40:58.262779 containerd[1660]: time="2025-07-14T22:40:58.262730524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 14 
22:40:58.263286 containerd[1660]: time="2025-07-14T22:40:58.263269275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 22:40:59.395338 containerd[1660]: time="2025-07-14T22:40:59.395298550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:59.400101 containerd[1660]: time="2025-07-14T22:40:59.400077322Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 14 22:40:59.406308 containerd[1660]: time="2025-07-14T22:40:59.406284013Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:59.410588 containerd[1660]: time="2025-07-14T22:40:59.410564788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:40:59.411206 containerd[1660]: time="2025-07-14T22:40:59.411068786Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.147724042s" Jul 14 22:40:59.411206 containerd[1660]: time="2025-07-14T22:40:59.411088940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 14 22:40:59.411879 containerd[1660]: time="2025-07-14T22:40:59.411854933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 22:41:00.276278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902365639.mount: Deactivated successfully. 
Jul 14 22:41:00.604315 containerd[1660]: time="2025-07-14T22:41:00.604113585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:00.606531 containerd[1660]: time="2025-07-14T22:41:00.606508587Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 14 22:41:00.613370 containerd[1660]: time="2025-07-14T22:41:00.613338792Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:00.620779 containerd[1660]: time="2025-07-14T22:41:00.620746107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:00.621023 containerd[1660]: time="2025-07-14T22:41:00.620999064Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.209117827s" Jul 14 22:41:00.621023 containerd[1660]: time="2025-07-14T22:41:00.621021097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 14 22:41:01.502023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jul 14 22:41:01.504164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:41:02.025538 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:41:02.025605 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 22:41:02.025843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:41:02.026000 systemd[1]: kubelet.service: Consumed 80ms CPU time, 98.8M memory peak. Jul 14 22:41:02.029505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:41:02.054844 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-9.scope)... Jul 14 22:41:02.054862 systemd[1]: Reloading... Jul 14 22:41:02.146285 zram_generator::config[2635]: No configuration found. Jul 14 22:41:02.240503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:41:02.249135 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:41:02.322301 systemd[1]: Reloading finished in 267 ms. Jul 14 22:41:02.383273 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:41:02.383346 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 22:41:02.383564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:41:02.384862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:41:02.792305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
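From this restart onward the kubelet gets past config loading: the start below only reports KUBELET_EXTRA_ARGS as unset (earlier starts also listed KUBELET_KUBEADM_ARGS), and the process goes on to parse its flags, config file and static pod path, which suggests kubeadm has by now written its node files. A quick inventory sketch of the files kubeadm normally drops on a node; the list is an assumption about this host based on standard kubeadm locations, with the entries that also appear in the log noted in comments:

```python
from pathlib import Path

# Standard kubeadm-managed locations; their presence explains why the kubelet
# below no longer fails on config loading.
KUBEADM_FILES = [
    "/var/lib/kubelet/config.yaml",        # KubeletConfiguration (the file missing in the earlier crash loop)
    "/var/lib/kubelet/kubeadm-flags.env",  # source of KUBELET_KUBEADM_ARGS
    "/etc/kubernetes/kubelet.conf",        # kubeconfig the kubelet uses after bootstrap
    "/etc/kubernetes/bootstrap-kubelet.conf",
    "/etc/kubernetes/pki/ca.crt",          # client-ca bundle referenced by the kubelet below
    "/etc/kubernetes/manifests",           # static pod path the kubelet watches (see "Adding static pod path" below)
]

for path in KUBEADM_FILES:
    print(f"{path}: {'present' if Path(path).exists() else 'missing'}")
```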
Jul 14 22:41:02.795074 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:41:02.955873 kubelet[2703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:41:02.955873 kubelet[2703]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:41:02.955873 kubelet[2703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:41:02.962060 kubelet[2703]: I0714 22:41:02.961981 2703 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:41:03.123940 kubelet[2703]: I0714 22:41:03.123694 2703 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:41:03.123940 kubelet[2703]: I0714 22:41:03.123717 2703 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:41:03.123940 kubelet[2703]: I0714 22:41:03.123925 2703 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:41:03.350561 kubelet[2703]: E0714 22:41:03.350521 2703 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:41:03.358871 kubelet[2703]: I0714 22:41:03.358834 2703 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:41:03.386002 kubelet[2703]: I0714 22:41:03.385791 2703 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 14 22:41:03.391482 kubelet[2703]: I0714 22:41:03.391453 2703 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:41:03.395448 kubelet[2703]: I0714 22:41:03.395415 2703 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:41:03.396615 kubelet[2703]: I0714 22:41:03.396581 2703 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:41:03.396738 kubelet[2703]: I0714 22:41:03.396614 2703 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:41:03.399751 kubelet[2703]: I0714 22:41:03.399719 2703 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:41:03.399751 kubelet[2703]: I0714 22:41:03.399746 2703 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:41:03.400383 kubelet[2703]: I0714 22:41:03.400368 2703 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:41:03.409165 kubelet[2703]: I0714 22:41:03.409127 2703 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:41:03.409165 kubelet[2703]: I0714 22:41:03.409160 2703 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:41:03.411199 kubelet[2703]: I0714 22:41:03.411181 2703 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:41:03.411262 kubelet[2703]: I0714 22:41:03.411210 2703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:41:03.418003 kubelet[2703]: W0714 22:41:03.417883 2703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 14 22:41:03.418003 kubelet[2703]: E0714 22:41:03.417925 2703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:41:03.418003 kubelet[2703]: W0714 22:41:03.417967 2703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 14 22:41:03.418003 kubelet[2703]: E0714 22:41:03.417984 2703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:41:03.418402 kubelet[2703]: I0714 22:41:03.418275 2703 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 14 22:41:03.423356 kubelet[2703]: I0714 22:41:03.423336 2703 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:41:03.426192 kubelet[2703]: W0714 22:41:03.425757 2703 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:41:03.429724 kubelet[2703]: I0714 22:41:03.429704 2703 server.go:1274] "Started kubelet" Jul 14 22:41:03.440553 kubelet[2703]: I0714 22:41:03.440529 2703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:41:03.443850 kubelet[2703]: I0714 22:41:03.443751 2703 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:41:03.451491 kubelet[2703]: I0714 22:41:03.451368 2703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:41:03.451614 kubelet[2703]: I0714 22:41:03.451599 2703 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:41:03.451796 kubelet[2703]: I0714 22:41:03.451784 2703 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:41:03.452817 kubelet[2703]: E0714 22:41:03.449056 2703 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523f5a9deca573 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:41:03.429682547 +0000 UTC m=+0.516895160,LastTimestamp:2025-07-14 22:41:03.429682547 +0000 UTC m=+0.516895160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:41:03.455111 kubelet[2703]: I0714 22:41:03.455084 2703 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:41:03.455196 kubelet[2703]: E0714 22:41:03.455185 2703 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 14 22:41:03.466955 kubelet[2703]: I0714 22:41:03.466830 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:41:03.467977 kubelet[2703]: I0714 22:41:03.467889 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:41:03.468547 kubelet[2703]: I0714 22:41:03.468399 2703 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:41:03.468547 kubelet[2703]: I0714 22:41:03.468429 2703 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:41:03.468547 kubelet[2703]: E0714 22:41:03.468461 2703 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:41:03.468660 kubelet[2703]: I0714 22:41:03.468652 2703 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:41:03.474722 kubelet[2703]: E0714 22:41:03.474689 2703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Jul 14 22:41:03.474985 kubelet[2703]: W0714 22:41:03.474886 2703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 14 22:41:03.474985 kubelet[2703]: E0714 22:41:03.474927 2703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:41:03.476846 kubelet[2703]: I0714 22:41:03.476789 2703 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:41:03.476846 kubelet[2703]: I0714 22:41:03.476818 2703 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:41:03.477423 kubelet[2703]: W0714 22:41:03.477290 2703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 14 22:41:03.477423 kubelet[2703]: E0714 22:41:03.477311 2703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:41:03.477423 kubelet[2703]: I0714 22:41:03.477380 2703 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:41:03.477423 kubelet[2703]: I0714 22:41:03.477387 2703 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:41:03.477423 kubelet[2703]: I0714 22:41:03.477423 2703 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:41:03.478411 kubelet[2703]: E0714 22:41:03.478324 2703 kubelet.go:1478] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:41:03.495798 kubelet[2703]: I0714 22:41:03.495767 2703 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:41:03.495994 kubelet[2703]: I0714 22:41:03.495910 2703 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:41:03.495994 kubelet[2703]: I0714 22:41:03.495929 2703 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:41:03.497670 kubelet[2703]: I0714 22:41:03.497529 2703 policy_none.go:49] "None policy: Start" Jul 14 22:41:03.498047 kubelet[2703]: I0714 22:41:03.498035 2703 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:41:03.498084 kubelet[2703]: I0714 22:41:03.498051 2703 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:41:03.507962 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 22:41:03.519878 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 22:41:03.534336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 22:41:03.535229 kubelet[2703]: I0714 22:41:03.535213 2703 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:41:03.535384 kubelet[2703]: I0714 22:41:03.535374 2703 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:41:03.535423 kubelet[2703]: I0714 22:41:03.535388 2703 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:41:03.536036 kubelet[2703]: I0714 22:41:03.536018 2703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:41:03.536677 kubelet[2703]: E0714 22:41:03.536666 2703 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:41:03.576274 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 14 22:41:03.593913 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 14 22:41:03.606133 systemd[1]: Created slice kubepods-burstable-pod795f69a91dc182145837222cdb53e291.slice - libcontainer container kubepods-burstable-pod795f69a91dc182145837222cdb53e291.slice. 
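The kubepods-burstable-pod<UID>.slice units created above embed the pod UIDs of the three static control-plane pods; the same UIDs reappear just below in the volume reconciler entries, which is how they map to kube-controller-manager, kube-scheduler and kube-apiserver on this node. A small decoder for that slice-name convention (systemd cgroup driver, as reported by the kubelet above), using the UIDs from the log:

```python
def slice_to_pod_uid(slice_name: str) -> str:
    """Extract the pod UID from a kubepods cgroup slice name."""
    stem = slice_name.removesuffix(".slice")
    return stem.rsplit("-pod", 1)[1]


# UID -> static pod, taken from the slice names above and the volume
# reconciler entries that follow.
static_pods = {
    "3f04709fe51ae4ab5abd58e8da771b74": "kube-controller-manager-localhost",
    "b35b56493416c25588cb530e37ffc065": "kube-scheduler-localhost",
    "795f69a91dc182145837222cdb53e291": "kube-apiserver-localhost",
}

for uid, pod in static_pods.items():
    name = f"kubepods-burstable-pod{uid}.slice"
    print(f"{name} -> {slice_to_pod_uid(name)} -> {pod}")
```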
Jul 14 22:41:03.638319 kubelet[2703]: I0714 22:41:03.637218 2703 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:41:03.638645 kubelet[2703]: E0714 22:41:03.638631 2703 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jul 14 22:41:03.675111 kubelet[2703]: E0714 22:41:03.675072 2703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Jul 14 22:41:03.681527 kubelet[2703]: I0714 22:41:03.681429 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:41:03.681527 kubelet[2703]: I0714 22:41:03.681455 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:03.681527 kubelet[2703]: I0714 22:41:03.681467 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:03.681527 kubelet[2703]: I0714 22:41:03.681481 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:03.681527 kubelet[2703]: I0714 22:41:03.681493 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/795f69a91dc182145837222cdb53e291-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"795f69a91dc182145837222cdb53e291\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:03.681733 kubelet[2703]: I0714 22:41:03.681505 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/795f69a91dc182145837222cdb53e291-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"795f69a91dc182145837222cdb53e291\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:03.681733 kubelet[2703]: I0714 22:41:03.681513 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 
22:41:03.681733 kubelet[2703]: I0714 22:41:03.681521 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:03.681733 kubelet[2703]: I0714 22:41:03.681529 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/795f69a91dc182145837222cdb53e291-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"795f69a91dc182145837222cdb53e291\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:03.839963 kubelet[2703]: I0714 22:41:03.839905 2703 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:41:03.840143 kubelet[2703]: E0714 22:41:03.840127 2703 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jul 14 22:41:03.895548 containerd[1660]: time="2025-07-14T22:41:03.895457257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:03.905741 containerd[1660]: time="2025-07-14T22:41:03.905585218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:03.921518 containerd[1660]: time="2025-07-14T22:41:03.921399784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:795f69a91dc182145837222cdb53e291,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:04.076155 kubelet[2703]: E0714 22:41:04.076050 2703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Jul 14 22:41:04.076645 containerd[1660]: time="2025-07-14T22:41:04.076536561Z" level=info msg="connecting to shim 4bdbce8891d267a672c6b36185a61e71179bebfc5628cff2c9f44bff8c21295b" address="unix:///run/containerd/s/39351ef85ee6ee818965b6935b8c514181e3008a2834659f728ed46331d182ef" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:04.083388 containerd[1660]: time="2025-07-14T22:41:04.083365650Z" level=info msg="connecting to shim bc35ad7d4c93351d729114c1cc346ed0f1766ceff22c7ce6f4da52e7ba78327b" address="unix:///run/containerd/s/b6f6b9e67575c0add9d94bd9321f70c71953c9eadfbba6bae541ba7020e490b6" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:04.146921 containerd[1660]: time="2025-07-14T22:41:04.146841916Z" level=info msg="connecting to shim 23f9629441bb228bf7ba5e680c9b43aaabf71e3500c3b4cb2a2d42c1629479b5" address="unix:///run/containerd/s/a1d3ebb2d531e970b4ea123f05fd7ad7b65e8349d63a4dd85c6f9ef4e2e36b95" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:04.241872 kubelet[2703]: I0714 22:41:04.241858 2703 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:41:04.242324 kubelet[2703]: E0714 22:41:04.242307 2703 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: 
connect: connection refused" node="localhost" Jul 14 22:41:04.271407 systemd[1]: Started cri-containerd-23f9629441bb228bf7ba5e680c9b43aaabf71e3500c3b4cb2a2d42c1629479b5.scope - libcontainer container 23f9629441bb228bf7ba5e680c9b43aaabf71e3500c3b4cb2a2d42c1629479b5. Jul 14 22:41:04.272484 systemd[1]: Started cri-containerd-4bdbce8891d267a672c6b36185a61e71179bebfc5628cff2c9f44bff8c21295b.scope - libcontainer container 4bdbce8891d267a672c6b36185a61e71179bebfc5628cff2c9f44bff8c21295b. Jul 14 22:41:04.274310 systemd[1]: Started cri-containerd-bc35ad7d4c93351d729114c1cc346ed0f1766ceff22c7ce6f4da52e7ba78327b.scope - libcontainer container bc35ad7d4c93351d729114c1cc346ed0f1766ceff22c7ce6f4da52e7ba78327b. Jul 14 22:41:04.318998 containerd[1660]: time="2025-07-14T22:41:04.318733335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bdbce8891d267a672c6b36185a61e71179bebfc5628cff2c9f44bff8c21295b\"" Jul 14 22:41:04.326758 containerd[1660]: time="2025-07-14T22:41:04.326689872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:795f69a91dc182145837222cdb53e291,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc35ad7d4c93351d729114c1cc346ed0f1766ceff22c7ce6f4da52e7ba78327b\"" Jul 14 22:41:04.329799 containerd[1660]: time="2025-07-14T22:41:04.329770900Z" level=info msg="CreateContainer within sandbox \"4bdbce8891d267a672c6b36185a61e71179bebfc5628cff2c9f44bff8c21295b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:41:04.330631 containerd[1660]: time="2025-07-14T22:41:04.330614017Z" level=info msg="CreateContainer within sandbox \"bc35ad7d4c93351d729114c1cc346ed0f1766ceff22c7ce6f4da52e7ba78327b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:41:04.335363 containerd[1660]: time="2025-07-14T22:41:04.335332964Z" level=info msg="Container 70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:04.338420 containerd[1660]: time="2025-07-14T22:41:04.338396971Z" level=info msg="Container 803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:04.350058 containerd[1660]: time="2025-07-14T22:41:04.350032254Z" level=info msg="CreateContainer within sandbox \"bc35ad7d4c93351d729114c1cc346ed0f1766ceff22c7ce6f4da52e7ba78327b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b\"" Jul 14 22:41:04.351939 containerd[1660]: time="2025-07-14T22:41:04.351221705Z" level=info msg="StartContainer for \"803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b\"" Jul 14 22:41:04.351939 containerd[1660]: time="2025-07-14T22:41:04.351843599Z" level=info msg="connecting to shim 803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b" address="unix:///run/containerd/s/b6f6b9e67575c0add9d94bd9321f70c71953c9eadfbba6bae541ba7020e490b6" protocol=ttrpc version=3 Jul 14 22:41:04.353299 containerd[1660]: time="2025-07-14T22:41:04.353271247Z" level=info msg="CreateContainer within sandbox \"4bdbce8891d267a672c6b36185a61e71179bebfc5628cff2c9f44bff8c21295b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275\"" Jul 14 22:41:04.353762 containerd[1660]: 
time="2025-07-14T22:41:04.353662014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"23f9629441bb228bf7ba5e680c9b43aaabf71e3500c3b4cb2a2d42c1629479b5\"" Jul 14 22:41:04.354093 containerd[1660]: time="2025-07-14T22:41:04.354079603Z" level=info msg="StartContainer for \"70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275\"" Jul 14 22:41:04.355636 containerd[1660]: time="2025-07-14T22:41:04.355598322Z" level=info msg="connecting to shim 70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275" address="unix:///run/containerd/s/39351ef85ee6ee818965b6935b8c514181e3008a2834659f728ed46331d182ef" protocol=ttrpc version=3 Jul 14 22:41:04.357460 containerd[1660]: time="2025-07-14T22:41:04.357434209Z" level=info msg="CreateContainer within sandbox \"23f9629441bb228bf7ba5e680c9b43aaabf71e3500c3b4cb2a2d42c1629479b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:41:04.370047 containerd[1660]: time="2025-07-14T22:41:04.370020482Z" level=info msg="Container 3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:04.371450 systemd[1]: Started cri-containerd-803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b.scope - libcontainer container 803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b. Jul 14 22:41:04.378131 containerd[1660]: time="2025-07-14T22:41:04.378096885Z" level=info msg="CreateContainer within sandbox \"23f9629441bb228bf7ba5e680c9b43aaabf71e3500c3b4cb2a2d42c1629479b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca\"" Jul 14 22:41:04.380129 containerd[1660]: time="2025-07-14T22:41:04.379888624Z" level=info msg="StartContainer for \"3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca\"" Jul 14 22:41:04.380683 containerd[1660]: time="2025-07-14T22:41:04.380666404Z" level=info msg="connecting to shim 3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca" address="unix:///run/containerd/s/a1d3ebb2d531e970b4ea123f05fd7ad7b65e8349d63a4dd85c6f9ef4e2e36b95" protocol=ttrpc version=3 Jul 14 22:41:04.382353 systemd[1]: Started cri-containerd-70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275.scope - libcontainer container 70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275. Jul 14 22:41:04.405372 systemd[1]: Started cri-containerd-3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca.scope - libcontainer container 3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca. 
Jul 14 22:41:04.437322 containerd[1660]: time="2025-07-14T22:41:04.437285548Z" level=info msg="StartContainer for \"803718e787e52111cbeec1d62ffc0c869b92f2eb53754529db8db6b5a229026b\" returns successfully" Jul 14 22:41:04.447031 containerd[1660]: time="2025-07-14T22:41:04.447002143Z" level=info msg="StartContainer for \"70859f2a48bbcdbb2cdf9b574420f9166ab0de5182cca83e2869e7f68fe93275\" returns successfully" Jul 14 22:41:04.495194 containerd[1660]: time="2025-07-14T22:41:04.495081495Z" level=info msg="StartContainer for \"3d0b7fe83b84ef245442d52a4476927fd6890c89de4d62c0e424ad1fed87feca\" returns successfully" Jul 14 22:41:05.046351 kubelet[2703]: I0714 22:41:05.045921 2703 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:41:05.625979 kubelet[2703]: E0714 22:41:05.625957 2703 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 22:41:05.711231 kubelet[2703]: I0714 22:41:05.710651 2703 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:41:06.419171 kubelet[2703]: I0714 22:41:06.419051 2703 apiserver.go:52] "Watching apiserver" Jul 14 22:41:06.477017 kubelet[2703]: I0714 22:41:06.476981 2703 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:41:06.505197 kubelet[2703]: E0714 22:41:06.505166 2703 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 14 22:41:07.905968 systemd[1]: Reload requested from client PID 2973 ('systemctl') (unit session-9.scope)... Jul 14 22:41:07.906159 systemd[1]: Reloading... Jul 14 22:41:07.970254 zram_generator::config[3019]: No configuration found. Jul 14 22:41:08.048898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:41:08.057679 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:41:08.154275 systemd[1]: Reloading finished in 247 ms. Jul 14 22:41:08.177601 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:41:08.192979 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:41:08.193157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:41:08.193201 systemd[1]: kubelet.service: Consumed 524ms CPU time, 135.5M memory peak. Jul 14 22:41:08.196739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:41:08.888582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:41:08.893554 (kubelet)[3084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:41:09.016286 kubelet[3084]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:41:09.016286 kubelet[3084]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 14 22:41:09.016286 kubelet[3084]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:41:09.021558 kubelet[3084]: I0714 22:41:09.021480 3084 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:41:09.030185 kubelet[3084]: I0714 22:41:09.030148 3084 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:41:09.030185 kubelet[3084]: I0714 22:41:09.030177 3084 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:41:09.030528 kubelet[3084]: I0714 22:41:09.030511 3084 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:41:09.031889 kubelet[3084]: I0714 22:41:09.031860 3084 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:41:09.033952 kubelet[3084]: I0714 22:41:09.033798 3084 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:41:09.041152 kubelet[3084]: I0714 22:41:09.041121 3084 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 14 22:41:09.043855 kubelet[3084]: I0714 22:41:09.043822 3084 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:41:09.044280 kubelet[3084]: I0714 22:41:09.043909 3084 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:41:09.044280 kubelet[3084]: I0714 22:41:09.043990 3084 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:41:09.044280 kubelet[3084]: I0714 22:41:09.044011 3084 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:41:09.044280 kubelet[3084]: I0714 22:41:09.044181 3084 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:41:09.045444 kubelet[3084]: I0714 22:41:09.044190 3084 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:41:09.045444 kubelet[3084]: I0714 22:41:09.044217 3084 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:41:09.054208 kubelet[3084]: I0714 22:41:09.054186 3084 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:41:09.054340 kubelet[3084]: I0714 22:41:09.054332 3084 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:41:09.054519 kubelet[3084]: I0714 22:41:09.054511 3084 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:41:09.054570 kubelet[3084]: I0714 22:41:09.054564 3084 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:41:09.056709 kubelet[3084]: I0714 22:41:09.056532 3084 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 14 22:41:09.057481 kubelet[3084]: I0714 22:41:09.057194 3084 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:41:09.058634 kubelet[3084]: I0714 22:41:09.058125 3084 server.go:1274] "Started kubelet" Jul 14 22:41:09.059120 kubelet[3084]: I0714 22:41:09.059014 3084 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:41:09.059609 kubelet[3084]: I0714 22:41:09.059423 3084 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:41:09.060156 kubelet[3084]: I0714 22:41:09.059900 3084 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:41:09.060446 kubelet[3084]: I0714 22:41:09.059927 3084 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:41:09.061636 kubelet[3084]: I0714 22:41:09.061050 3084 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:41:09.066482 kubelet[3084]: I0714 22:41:09.066461 3084 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:41:09.071361 kubelet[3084]: E0714 22:41:09.071343 3084 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:41:09.074405 kubelet[3084]: I0714 22:41:09.074375 3084 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:41:09.074513 kubelet[3084]: I0714 22:41:09.074496 3084 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:41:09.074628 kubelet[3084]: I0714 22:41:09.074612 3084 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:41:09.075027 kubelet[3084]: I0714 22:41:09.075007 3084 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:41:09.075027 kubelet[3084]: I0714 22:41:09.075021 3084 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:41:09.075179 kubelet[3084]: I0714 22:41:09.075097 3084 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:41:09.076612 kubelet[3084]: I0714 22:41:09.076591 3084 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:41:09.077746 kubelet[3084]: I0714 22:41:09.077528 3084 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:41:09.077746 kubelet[3084]: I0714 22:41:09.077545 3084 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:41:09.077746 kubelet[3084]: I0714 22:41:09.077559 3084 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:41:09.077746 kubelet[3084]: E0714 22:41:09.077588 3084 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:41:09.128724 kubelet[3084]: I0714 22:41:09.128702 3084 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:41:09.128724 kubelet[3084]: I0714 22:41:09.128716 3084 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:41:09.128724 kubelet[3084]: I0714 22:41:09.128728 3084 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:41:09.128971 kubelet[3084]: I0714 22:41:09.128824 3084 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:41:09.128971 kubelet[3084]: I0714 22:41:09.128831 3084 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:41:09.128971 kubelet[3084]: I0714 22:41:09.128843 3084 policy_none.go:49] "None policy: Start" Jul 14 22:41:09.129427 kubelet[3084]: I0714 22:41:09.129412 3084 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:41:09.129427 kubelet[3084]: I0714 22:41:09.129427 3084 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:41:09.129517 kubelet[3084]: I0714 22:41:09.129506 3084 state_mem.go:75] "Updated machine memory state" Jul 14 22:41:09.132445 kubelet[3084]: I0714 22:41:09.132430 3084 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:41:09.133049 kubelet[3084]: I0714 22:41:09.132885 3084 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jul 14 22:41:09.133049 kubelet[3084]: I0714 22:41:09.132901 3084 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:41:09.133385 kubelet[3084]: I0714 22:41:09.133376 3084 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:41:09.190418 kubelet[3084]: E0714 22:41:09.190321 3084 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:09.222636 sudo[3114]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 22:41:09.222844 sudo[3114]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 14 22:41:09.238328 kubelet[3084]: I0714 22:41:09.238308 3084 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:41:09.249272 kubelet[3084]: I0714 22:41:09.249217 3084 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:41:09.249394 kubelet[3084]: I0714 22:41:09.249316 3084 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:41:09.375039 kubelet[3084]: I0714 22:41:09.375012 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:09.375039 kubelet[3084]: I0714 22:41:09.375037 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:09.375149 kubelet[3084]: I0714 22:41:09.375052 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:09.375149 kubelet[3084]: I0714 22:41:09.375068 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:09.375149 kubelet[3084]: I0714 22:41:09.375082 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:41:09.375149 kubelet[3084]: I0714 22:41:09.375116 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/795f69a91dc182145837222cdb53e291-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"795f69a91dc182145837222cdb53e291\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:09.375149 kubelet[3084]: I0714 22:41:09.375133 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/795f69a91dc182145837222cdb53e291-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"795f69a91dc182145837222cdb53e291\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:09.375270 kubelet[3084]: I0714 22:41:09.375143 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/795f69a91dc182145837222cdb53e291-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"795f69a91dc182145837222cdb53e291\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:09.375270 kubelet[3084]: I0714 22:41:09.375151 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:09.901320 sudo[3114]: pam_unix(sudo:session): session closed for user root Jul 14 22:41:10.056413 kubelet[3084]: I0714 22:41:10.056362 3084 apiserver.go:52] "Watching apiserver" Jul 14 22:41:10.075657 kubelet[3084]: I0714 22:41:10.075618 3084 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:41:10.241217 kubelet[3084]: I0714 22:41:10.240930 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.240893356 podStartE2EDuration="1.240893356s" podCreationTimestamp="2025-07-14 22:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:10.157686102 +0000 UTC m=+1.261681207" watchObservedRunningTime="2025-07-14 22:41:10.240893356 +0000 UTC m=+1.344888459" Jul 14 22:41:10.270545 kubelet[3084]: I0714 22:41:10.270283 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.270270033 podStartE2EDuration="1.270270033s" podCreationTimestamp="2025-07-14 22:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:10.241179087 +0000 UTC m=+1.345174187" watchObservedRunningTime="2025-07-14 22:41:10.270270033 +0000 UTC m=+1.374265136" Jul 14 22:41:12.358003 sudo[1971]: pam_unix(sudo:session): session closed for user root Jul 14 22:41:12.358869 sshd[1970]: Connection closed by 139.178.89.65 port 56368 Jul 14 22:41:12.377740 sshd-session[1967]: pam_unix(sshd:session): session closed for user core Jul 14 22:41:12.381141 systemd-logind[1629]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:41:12.381294 systemd[1]: sshd@6-139.178.70.108:22-139.178.89.65:56368.service: Deactivated successfully. Jul 14 22:41:12.383004 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:41:12.383173 systemd[1]: session-9.scope: Consumed 2.887s CPU time, 202.4M memory peak. Jul 14 22:41:12.385354 systemd-logind[1629]: Removed session 9. 
Jul 14 22:41:12.642669 kubelet[3084]: I0714 22:41:12.642547 3084 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:41:12.659121 containerd[1660]: time="2025-07-14T22:41:12.659076759Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 22:41:12.659701 kubelet[3084]: I0714 22:41:12.659685 3084 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:41:13.257081 kubelet[3084]: I0714 22:41:13.256868 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.256856479 podStartE2EDuration="7.256856479s" podCreationTimestamp="2025-07-14 22:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:10.270446005 +0000 UTC m=+1.374441110" watchObservedRunningTime="2025-07-14 22:41:13.256856479 +0000 UTC m=+4.360851591" Jul 14 22:41:13.268168 systemd[1]: Created slice kubepods-besteffort-podc27bfd5a_cec8_483c_991d_d1e10cf5218a.slice - libcontainer container kubepods-besteffort-podc27bfd5a_cec8_483c_991d_d1e10cf5218a.slice. Jul 14 22:41:13.281703 systemd[1]: Created slice kubepods-burstable-poda4766057_7f97_4c02_ae2f_688484934296.slice - libcontainer container kubepods-burstable-poda4766057_7f97_4c02_ae2f_688484934296.slice. Jul 14 22:41:13.300253 kubelet[3084]: I0714 22:41:13.299855 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-run\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300464 kubelet[3084]: I0714 22:41:13.300433 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-lib-modules\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300581 kubelet[3084]: I0714 22:41:13.300455 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fng78\" (UniqueName: \"kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-kube-api-access-fng78\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300581 kubelet[3084]: I0714 22:41:13.300528 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-hostproc\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300581 kubelet[3084]: I0714 22:41:13.300539 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c27bfd5a-cec8-483c-991d-d1e10cf5218a-lib-modules\") pod \"kube-proxy-qtxg9\" (UID: \"c27bfd5a-cec8-483c-991d-d1e10cf5218a\") " pod="kube-system/kube-proxy-qtxg9" Jul 14 22:41:13.300581 kubelet[3084]: I0714 22:41:13.300552 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-bpf-maps\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300581 kubelet[3084]: I0714 22:41:13.300563 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4766057-7f97-4c02-ae2f-688484934296-clustermesh-secrets\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300809 kubelet[3084]: I0714 22:41:13.300572 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4766057-7f97-4c02-ae2f-688484934296-cilium-config-path\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300809 kubelet[3084]: I0714 22:41:13.300751 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvr5g\" (UniqueName: \"kubernetes.io/projected/c27bfd5a-cec8-483c-991d-d1e10cf5218a-kube-api-access-kvr5g\") pod \"kube-proxy-qtxg9\" (UID: \"c27bfd5a-cec8-483c-991d-d1e10cf5218a\") " pod="kube-system/kube-proxy-qtxg9" Jul 14 22:41:13.300809 kubelet[3084]: I0714 22:41:13.300762 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-kernel\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300809 kubelet[3084]: I0714 22:41:13.300774 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cni-path\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300809 kubelet[3084]: I0714 22:41:13.300783 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-net\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.300934 kubelet[3084]: I0714 22:41:13.300793 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c27bfd5a-cec8-483c-991d-d1e10cf5218a-xtables-lock\") pod \"kube-proxy-qtxg9\" (UID: \"c27bfd5a-cec8-483c-991d-d1e10cf5218a\") " pod="kube-system/kube-proxy-qtxg9" Jul 14 22:41:13.301081 kubelet[3084]: I0714 22:41:13.300968 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-etc-cni-netd\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.301081 kubelet[3084]: I0714 22:41:13.300984 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c27bfd5a-cec8-483c-991d-d1e10cf5218a-kube-proxy\") pod \"kube-proxy-qtxg9\" (UID: 
\"c27bfd5a-cec8-483c-991d-d1e10cf5218a\") " pod="kube-system/kube-proxy-qtxg9" Jul 14 22:41:13.301081 kubelet[3084]: I0714 22:41:13.300995 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-cgroup\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.301081 kubelet[3084]: I0714 22:41:13.301005 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-xtables-lock\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.301081 kubelet[3084]: I0714 22:41:13.301013 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-hubble-tls\") pod \"cilium-6g6hj\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " pod="kube-system/cilium-6g6hj" Jul 14 22:41:13.420588 kubelet[3084]: E0714 22:41:13.420559 3084 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 14 22:41:13.420588 kubelet[3084]: E0714 22:41:13.420580 3084 projected.go:194] Error preparing data for projected volume kube-api-access-fng78 for pod kube-system/cilium-6g6hj: configmap "kube-root-ca.crt" not found Jul 14 22:41:13.420711 kubelet[3084]: E0714 22:41:13.420619 3084 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-kube-api-access-fng78 podName:a4766057-7f97-4c02-ae2f-688484934296 nodeName:}" failed. No retries permitted until 2025-07-14 22:41:13.920603486 +0000 UTC m=+5.024598588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fng78" (UniqueName: "kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-kube-api-access-fng78") pod "cilium-6g6hj" (UID: "a4766057-7f97-4c02-ae2f-688484934296") : configmap "kube-root-ca.crt" not found Jul 14 22:41:13.453904 kubelet[3084]: E0714 22:41:13.453866 3084 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 14 22:41:13.453904 kubelet[3084]: E0714 22:41:13.453892 3084 projected.go:194] Error preparing data for projected volume kube-api-access-kvr5g for pod kube-system/kube-proxy-qtxg9: configmap "kube-root-ca.crt" not found Jul 14 22:41:13.454021 kubelet[3084]: E0714 22:41:13.453921 3084 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c27bfd5a-cec8-483c-991d-d1e10cf5218a-kube-api-access-kvr5g podName:c27bfd5a-cec8-483c-991d-d1e10cf5218a nodeName:}" failed. No retries permitted until 2025-07-14 22:41:13.953909572 +0000 UTC m=+5.057904673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kvr5g" (UniqueName: "kubernetes.io/projected/c27bfd5a-cec8-483c-991d-d1e10cf5218a-kube-api-access-kvr5g") pod "kube-proxy-qtxg9" (UID: "c27bfd5a-cec8-483c-991d-d1e10cf5218a") : configmap "kube-root-ca.crt" not found Jul 14 22:41:13.840738 systemd[1]: Created slice kubepods-besteffort-pod2b56f1c8_f624_41da_b793_0b6de0f9bee9.slice - libcontainer container kubepods-besteffort-pod2b56f1c8_f624_41da_b793_0b6de0f9bee9.slice. 
Jul 14 22:41:13.905214 kubelet[3084]: I0714 22:41:13.905181 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b56f1c8-f624-41da-b793-0b6de0f9bee9-cilium-config-path\") pod \"cilium-operator-5d85765b45-pvpm2\" (UID: \"2b56f1c8-f624-41da-b793-0b6de0f9bee9\") " pod="kube-system/cilium-operator-5d85765b45-pvpm2" Jul 14 22:41:13.905517 kubelet[3084]: I0714 22:41:13.905225 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mr2\" (UniqueName: \"kubernetes.io/projected/2b56f1c8-f624-41da-b793-0b6de0f9bee9-kube-api-access-65mr2\") pod \"cilium-operator-5d85765b45-pvpm2\" (UID: \"2b56f1c8-f624-41da-b793-0b6de0f9bee9\") " pod="kube-system/cilium-operator-5d85765b45-pvpm2" Jul 14 22:41:14.143294 containerd[1660]: time="2025-07-14T22:41:14.143179735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pvpm2,Uid:2b56f1c8-f624-41da-b793-0b6de0f9bee9,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:14.179129 containerd[1660]: time="2025-07-14T22:41:14.179083324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qtxg9,Uid:c27bfd5a-cec8-483c-991d-d1e10cf5218a,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:14.184951 containerd[1660]: time="2025-07-14T22:41:14.184918411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g6hj,Uid:a4766057-7f97-4c02-ae2f-688484934296,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:14.556366 containerd[1660]: time="2025-07-14T22:41:14.556311508Z" level=info msg="connecting to shim cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7" address="unix:///run/containerd/s/84f262615d5acba51196614c79b9e573ee3b8a2838e012d74c99ad9cc9f73803" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:14.579465 systemd[1]: Started cri-containerd-cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7.scope - libcontainer container cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7. Jul 14 22:41:14.684595 containerd[1660]: time="2025-07-14T22:41:14.684563342Z" level=info msg="connecting to shim 3d7e1f3f0a2374a22145df6387340f352a09b87301b58e97d5973c92acbecf8b" address="unix:///run/containerd/s/86937b84303d11cc237969153e10ee7db5c32555346fc84ee5d39607ad388038" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:14.701651 containerd[1660]: time="2025-07-14T22:41:14.701616973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pvpm2,Uid:2b56f1c8-f624-41da-b793-0b6de0f9bee9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\"" Jul 14 22:41:14.706421 containerd[1660]: time="2025-07-14T22:41:14.706074929Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 22:41:14.706229 systemd[1]: Started cri-containerd-3d7e1f3f0a2374a22145df6387340f352a09b87301b58e97d5973c92acbecf8b.scope - libcontainer container 3d7e1f3f0a2374a22145df6387340f352a09b87301b58e97d5973c92acbecf8b. 
Jul 14 22:41:14.753141 containerd[1660]: time="2025-07-14T22:41:14.752922420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qtxg9,Uid:c27bfd5a-cec8-483c-991d-d1e10cf5218a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d7e1f3f0a2374a22145df6387340f352a09b87301b58e97d5973c92acbecf8b\"" Jul 14 22:41:14.756098 containerd[1660]: time="2025-07-14T22:41:14.755941889Z" level=info msg="CreateContainer within sandbox \"3d7e1f3f0a2374a22145df6387340f352a09b87301b58e97d5973c92acbecf8b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:41:14.774058 containerd[1660]: time="2025-07-14T22:41:14.774021799Z" level=info msg="connecting to shim 58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408" address="unix:///run/containerd/s/62e3af6b2eb239442234aac67f9b6aefc50d127cc2455746c96b70ff4efad374" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:14.780927 containerd[1660]: time="2025-07-14T22:41:14.780882991Z" level=info msg="Container 5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:14.796713 systemd[1]: Started cri-containerd-58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408.scope - libcontainer container 58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408. Jul 14 22:41:14.808187 containerd[1660]: time="2025-07-14T22:41:14.808105694Z" level=info msg="CreateContainer within sandbox \"3d7e1f3f0a2374a22145df6387340f352a09b87301b58e97d5973c92acbecf8b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9\"" Jul 14 22:41:14.809056 containerd[1660]: time="2025-07-14T22:41:14.809006273Z" level=info msg="StartContainer for \"5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9\"" Jul 14 22:41:14.810541 containerd[1660]: time="2025-07-14T22:41:14.810509672Z" level=info msg="connecting to shim 5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9" address="unix:///run/containerd/s/86937b84303d11cc237969153e10ee7db5c32555346fc84ee5d39607ad388038" protocol=ttrpc version=3 Jul 14 22:41:14.830898 containerd[1660]: time="2025-07-14T22:41:14.830597227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g6hj,Uid:a4766057-7f97-4c02-ae2f-688484934296,Namespace:kube-system,Attempt:0,} returns sandbox id \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\"" Jul 14 22:41:14.841571 systemd[1]: Started cri-containerd-5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9.scope - libcontainer container 5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9. 
Jul 14 22:41:14.876627 containerd[1660]: time="2025-07-14T22:41:14.876511184Z" level=info msg="StartContainer for \"5bae2ea83004b627ff2554ae86eb0da7544f6cb0a020e76fb2a2d92665355cf9\" returns successfully" Jul 14 22:41:15.156429 kubelet[3084]: I0714 22:41:15.156221 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qtxg9" podStartSLOduration=2.156208011 podStartE2EDuration="2.156208011s" podCreationTimestamp="2025-07-14 22:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:15.156095063 +0000 UTC m=+6.260090169" watchObservedRunningTime="2025-07-14 22:41:15.156208011 +0000 UTC m=+6.260203121" Jul 14 22:41:15.759090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577515130.mount: Deactivated successfully. Jul 14 22:41:16.484251 containerd[1660]: time="2025-07-14T22:41:16.484203647Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:16.484976 containerd[1660]: time="2025-07-14T22:41:16.484908360Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 14 22:41:16.485084 containerd[1660]: time="2025-07-14T22:41:16.485071447Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:16.485838 containerd[1660]: time="2025-07-14T22:41:16.485819958Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.779243323s" Jul 14 22:41:16.485874 containerd[1660]: time="2025-07-14T22:41:16.485840321Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 14 22:41:16.486951 containerd[1660]: time="2025-07-14T22:41:16.486815989Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 22:41:16.488779 containerd[1660]: time="2025-07-14T22:41:16.488747571Z" level=info msg="CreateContainer within sandbox \"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 22:41:16.497756 containerd[1660]: time="2025-07-14T22:41:16.496704536Z" level=info msg="Container 86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:16.498586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683644905.mount: Deactivated successfully. 
Jul 14 22:41:16.502501 containerd[1660]: time="2025-07-14T22:41:16.502469960Z" level=info msg="CreateContainer within sandbox \"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\"" Jul 14 22:41:16.503248 containerd[1660]: time="2025-07-14T22:41:16.502939807Z" level=info msg="StartContainer for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\"" Jul 14 22:41:16.503785 containerd[1660]: time="2025-07-14T22:41:16.503753281Z" level=info msg="connecting to shim 86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478" address="unix:///run/containerd/s/84f262615d5acba51196614c79b9e573ee3b8a2838e012d74c99ad9cc9f73803" protocol=ttrpc version=3 Jul 14 22:41:16.523410 systemd[1]: Started cri-containerd-86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478.scope - libcontainer container 86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478. Jul 14 22:41:16.555601 containerd[1660]: time="2025-07-14T22:41:16.555487739Z" level=info msg="StartContainer for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" returns successfully" Jul 14 22:41:21.385132 kubelet[3084]: I0714 22:41:21.385060 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pvpm2" podStartSLOduration=6.601144715 podStartE2EDuration="8.385048334s" podCreationTimestamp="2025-07-14 22:41:13 +0000 UTC" firstStartedPulling="2025-07-14 22:41:14.702704693 +0000 UTC m=+5.806699795" lastFinishedPulling="2025-07-14 22:41:16.486608311 +0000 UTC m=+7.590603414" observedRunningTime="2025-07-14 22:41:17.165135218 +0000 UTC m=+8.269130324" watchObservedRunningTime="2025-07-14 22:41:21.385048334 +0000 UTC m=+12.489043443" Jul 14 22:41:31.065524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187776072.mount: Deactivated successfully. 
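The pod startup figures logged above for cilium-operator-5d85765b45-pvpm2 are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that figure minus the image-pull window. Below is a minimal sketch in plain Python that re-derives both numbers from the timestamps printed in that entry (values truncated to microseconds); the subtraction rule is inferred from these particular values, not quoted from kubelet documentation.

from datetime import datetime, timezone

def ts(s):
    # Parse "YYYY-MM-DD HH:MM:SS.ffffff" as UTC; the log prints nanoseconds,
    # truncated here to microseconds for strptime.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created   = ts("2025-07-14 22:41:13.000000")   # podCreationTimestamp
pull_from = ts("2025-07-14 22:41:14.702704")   # firstStartedPulling
pull_to   = ts("2025-07-14 22:41:16.486608")   # lastFinishedPulling
running   = ts("2025-07-14 22:41:21.385048")   # observedRunningTime

e2e = (running - created).total_seconds()                # ~8.385048 s, as logged
slo = e2e - (pull_to - pull_from).total_seconds()        # ~6.601144 s, as logged
print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")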
Jul 14 22:41:33.512625 containerd[1660]: time="2025-07-14T22:41:33.512524773Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:33.514229 containerd[1660]: time="2025-07-14T22:41:33.514079460Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 14 22:41:33.514827 containerd[1660]: time="2025-07-14T22:41:33.514387755Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:41:33.515492 containerd[1660]: time="2025-07-14T22:41:33.515233126Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.028204927s" Jul 14 22:41:33.522027 containerd[1660]: time="2025-07-14T22:41:33.515270744Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 14 22:41:33.523337 containerd[1660]: time="2025-07-14T22:41:33.523309752Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:41:33.543383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168201384.mount: Deactivated successfully. Jul 14 22:41:33.545593 containerd[1660]: time="2025-07-14T22:41:33.545184166Z" level=info msg="Container 5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:33.545964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102395408.mount: Deactivated successfully. Jul 14 22:41:33.554650 containerd[1660]: time="2025-07-14T22:41:33.554615573Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\"" Jul 14 22:41:33.555209 containerd[1660]: time="2025-07-14T22:41:33.555100540Z" level=info msg="StartContainer for \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\"" Jul 14 22:41:33.556606 containerd[1660]: time="2025-07-14T22:41:33.556585791Z" level=info msg="connecting to shim 5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6" address="unix:///run/containerd/s/62e3af6b2eb239442234aac67f9b6aefc50d127cc2455746c96b70ff4efad374" protocol=ttrpc version=3 Jul 14 22:41:33.625339 systemd[1]: Started cri-containerd-5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6.scope - libcontainer container 5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6. 
Jul 14 22:41:33.649055 containerd[1660]: time="2025-07-14T22:41:33.648980401Z" level=info msg="StartContainer for \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" returns successfully" Jul 14 22:41:33.660272 systemd[1]: cri-containerd-5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6.scope: Deactivated successfully. Jul 14 22:41:33.670931 containerd[1660]: time="2025-07-14T22:41:33.670890783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" id:\"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" pid:3544 exited_at:{seconds:1752532893 nanos:661034241}" Jul 14 22:41:33.673708 containerd[1660]: time="2025-07-14T22:41:33.673679776Z" level=info msg="received exit event container_id:\"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" id:\"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" pid:3544 exited_at:{seconds:1752532893 nanos:661034241}" Jul 14 22:41:34.540527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6-rootfs.mount: Deactivated successfully. Jul 14 22:41:35.181715 containerd[1660]: time="2025-07-14T22:41:35.181438475Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:41:35.191175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085668993.mount: Deactivated successfully. Jul 14 22:41:35.191637 containerd[1660]: time="2025-07-14T22:41:35.191474892Z" level=info msg="Container 9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:35.198206 containerd[1660]: time="2025-07-14T22:41:35.198176313Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\"" Jul 14 22:41:35.198769 containerd[1660]: time="2025-07-14T22:41:35.198730221Z" level=info msg="StartContainer for \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\"" Jul 14 22:41:35.199409 containerd[1660]: time="2025-07-14T22:41:35.199381648Z" level=info msg="connecting to shim 9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5" address="unix:///run/containerd/s/62e3af6b2eb239442234aac67f9b6aefc50d127cc2455746c96b70ff4efad374" protocol=ttrpc version=3 Jul 14 22:41:35.213359 systemd[1]: Started cri-containerd-9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5.scope - libcontainer container 9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5. Jul 14 22:41:35.232183 containerd[1660]: time="2025-07-14T22:41:35.232162905Z" level=info msg="StartContainer for \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" returns successfully" Jul 14 22:41:35.241814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:41:35.241955 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:41:35.242064 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:41:35.243776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 14 22:41:35.245464 systemd[1]: cri-containerd-9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5.scope: Deactivated successfully. Jul 14 22:41:35.247494 containerd[1660]: time="2025-07-14T22:41:35.247466233Z" level=info msg="received exit event container_id:\"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" id:\"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" pid:3592 exited_at:{seconds:1752532895 nanos:247185134}" Jul 14 22:41:35.247779 containerd[1660]: time="2025-07-14T22:41:35.247763391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" id:\"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" pid:3592 exited_at:{seconds:1752532895 nanos:247185134}" Jul 14 22:41:35.279694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:41:35.541166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5-rootfs.mount: Deactivated successfully. Jul 14 22:41:36.185070 containerd[1660]: time="2025-07-14T22:41:36.185034618Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:41:36.193254 containerd[1660]: time="2025-07-14T22:41:36.191711849Z" level=info msg="Container 2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:36.197884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368421227.mount: Deactivated successfully. Jul 14 22:41:36.203310 containerd[1660]: time="2025-07-14T22:41:36.203193037Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\"" Jul 14 22:41:36.203768 containerd[1660]: time="2025-07-14T22:41:36.203753549Z" level=info msg="StartContainer for \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\"" Jul 14 22:41:36.206792 containerd[1660]: time="2025-07-14T22:41:36.206640730Z" level=info msg="connecting to shim 2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104" address="unix:///run/containerd/s/62e3af6b2eb239442234aac67f9b6aefc50d127cc2455746c96b70ff4efad374" protocol=ttrpc version=3 Jul 14 22:41:36.232394 systemd[1]: Started cri-containerd-2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104.scope - libcontainer container 2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104. Jul 14 22:41:36.258262 containerd[1660]: time="2025-07-14T22:41:36.258208606Z" level=info msg="StartContainer for \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" returns successfully" Jul 14 22:41:36.264174 systemd[1]: cri-containerd-2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104.scope: Deactivated successfully. Jul 14 22:41:36.264595 systemd[1]: cri-containerd-2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104.scope: Consumed 16ms CPU time, 5.3M memory peak, 1M read from disk. 
Jul 14 22:41:36.266141 containerd[1660]: time="2025-07-14T22:41:36.266064646Z" level=info msg="received exit event container_id:\"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" id:\"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" pid:3642 exited_at:{seconds:1752532896 nanos:265837480}" Jul 14 22:41:36.266306 containerd[1660]: time="2025-07-14T22:41:36.266294107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" id:\"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" pid:3642 exited_at:{seconds:1752532896 nanos:265837480}" Jul 14 22:41:36.280533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104-rootfs.mount: Deactivated successfully. Jul 14 22:41:37.188977 containerd[1660]: time="2025-07-14T22:41:37.188935184Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:41:37.195332 containerd[1660]: time="2025-07-14T22:41:37.195295587Z" level=info msg="Container c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:37.199952 containerd[1660]: time="2025-07-14T22:41:37.199925225Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\"" Jul 14 22:41:37.200402 containerd[1660]: time="2025-07-14T22:41:37.200387634Z" level=info msg="StartContainer for \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\"" Jul 14 22:41:37.200941 containerd[1660]: time="2025-07-14T22:41:37.200925990Z" level=info msg="connecting to shim c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320" address="unix:///run/containerd/s/62e3af6b2eb239442234aac67f9b6aefc50d127cc2455746c96b70ff4efad374" protocol=ttrpc version=3 Jul 14 22:41:37.224385 systemd[1]: Started cri-containerd-c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320.scope - libcontainer container c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320. Jul 14 22:41:37.241620 systemd[1]: cri-containerd-c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320.scope: Deactivated successfully. 
Jul 14 22:41:37.243054 containerd[1660]: time="2025-07-14T22:41:37.242971433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" id:\"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" pid:3682 exited_at:{seconds:1752532897 nanos:241848127}" Jul 14 22:41:37.243664 containerd[1660]: time="2025-07-14T22:41:37.243645433Z" level=info msg="received exit event container_id:\"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" id:\"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" pid:3682 exited_at:{seconds:1752532897 nanos:241848127}" Jul 14 22:41:37.249773 containerd[1660]: time="2025-07-14T22:41:37.247808705Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4766057_7f97_4c02_ae2f_688484934296.slice/cri-containerd-c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320.scope/memory.events\": no such file or directory" Jul 14 22:41:37.257192 containerd[1660]: time="2025-07-14T22:41:37.257168812Z" level=info msg="StartContainer for \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" returns successfully" Jul 14 22:41:37.266120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320-rootfs.mount: Deactivated successfully. Jul 14 22:41:38.191467 containerd[1660]: time="2025-07-14T22:41:38.191434488Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:41:38.202644 containerd[1660]: time="2025-07-14T22:41:38.202613698Z" level=info msg="Container aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:38.211279 containerd[1660]: time="2025-07-14T22:41:38.211229424Z" level=info msg="CreateContainer within sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\"" Jul 14 22:41:38.211812 containerd[1660]: time="2025-07-14T22:41:38.211795011Z" level=info msg="StartContainer for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\"" Jul 14 22:41:38.213431 containerd[1660]: time="2025-07-14T22:41:38.213394634Z" level=info msg="connecting to shim aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36" address="unix:///run/containerd/s/62e3af6b2eb239442234aac67f9b6aefc50d127cc2455746c96b70ff4efad374" protocol=ttrpc version=3 Jul 14 22:41:38.231491 systemd[1]: Started cri-containerd-aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36.scope - libcontainer container aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36. 
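The exited_at fields in the exit events above are protobuf-style timestamps, whole Unix-epoch seconds plus nanoseconds, and converting them reproduces the surrounding journal times. A small Python sketch using the two most recent pairs from the entries above (the mapping to the mount-bpf-fs and clean-cilium-state init containers is taken from those entries):

from datetime import datetime, timedelta, timezone

# exited_at {seconds, nanos} pairs copied from the exit events above.
exits = {
    "mount-bpf-fs":       (1752532896, 265837480),
    "clean-cilium-state": (1752532897, 241848127),
}
for name, (seconds, nanos) in exits.items():
    t = datetime.fromtimestamp(seconds, tz=timezone.utc) + timedelta(microseconds=nanos // 1000)
    print(name, t.isoformat())
# Prints 2025-07-14T22:41:36.265837+00:00 and 2025-07-14T22:41:37.241848+00:00,
# matching the journal timestamps logged around those TaskExit events.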
Jul 14 22:41:38.254311 containerd[1660]: time="2025-07-14T22:41:38.254275378Z" level=info msg="StartContainer for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" returns successfully" Jul 14 22:41:38.332837 containerd[1660]: time="2025-07-14T22:41:38.332566178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" id:\"c1f607e5d747ca235c2aa20aac9289ab0f23f260ca703dfa9d8d6b0f3526b786\" pid:3749 exited_at:{seconds:1752532898 nanos:331981012}" Jul 14 22:41:38.355675 kubelet[3084]: I0714 22:41:38.355647 3084 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 22:41:38.426666 systemd[1]: Created slice kubepods-burstable-pod56001680_c8d6_42a7_b5ae_5d5fcfcc63df.slice - libcontainer container kubepods-burstable-pod56001680_c8d6_42a7_b5ae_5d5fcfcc63df.slice. Jul 14 22:41:38.430958 systemd[1]: Created slice kubepods-burstable-pod24369ebc_86b8_445c_a63b_ba33e79812b5.slice - libcontainer container kubepods-burstable-pod24369ebc_86b8_445c_a63b_ba33e79812b5.slice. Jul 14 22:41:38.463075 kubelet[3084]: I0714 22:41:38.463008 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24369ebc-86b8-445c-a63b-ba33e79812b5-config-volume\") pod \"coredns-7c65d6cfc9-j8c5d\" (UID: \"24369ebc-86b8-445c-a63b-ba33e79812b5\") " pod="kube-system/coredns-7c65d6cfc9-j8c5d" Jul 14 22:41:38.463075 kubelet[3084]: I0714 22:41:38.463036 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2rw\" (UniqueName: \"kubernetes.io/projected/24369ebc-86b8-445c-a63b-ba33e79812b5-kube-api-access-zl2rw\") pod \"coredns-7c65d6cfc9-j8c5d\" (UID: \"24369ebc-86b8-445c-a63b-ba33e79812b5\") " pod="kube-system/coredns-7c65d6cfc9-j8c5d" Jul 14 22:41:38.463075 kubelet[3084]: I0714 22:41:38.463056 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcbz\" (UniqueName: \"kubernetes.io/projected/56001680-c8d6-42a7-b5ae-5d5fcfcc63df-kube-api-access-grcbz\") pod \"coredns-7c65d6cfc9-xd9gw\" (UID: \"56001680-c8d6-42a7-b5ae-5d5fcfcc63df\") " pod="kube-system/coredns-7c65d6cfc9-xd9gw" Jul 14 22:41:38.463371 kubelet[3084]: I0714 22:41:38.463159 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56001680-c8d6-42a7-b5ae-5d5fcfcc63df-config-volume\") pod \"coredns-7c65d6cfc9-xd9gw\" (UID: \"56001680-c8d6-42a7-b5ae-5d5fcfcc63df\") " pod="kube-system/coredns-7c65d6cfc9-xd9gw" Jul 14 22:41:38.730977 containerd[1660]: time="2025-07-14T22:41:38.730840295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xd9gw,Uid:56001680-c8d6-42a7-b5ae-5d5fcfcc63df,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:38.737294 containerd[1660]: time="2025-07-14T22:41:38.737274322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j8c5d,Uid:24369ebc-86b8-445c-a63b-ba33e79812b5,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:39.204351 kubelet[3084]: I0714 22:41:39.204303 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6g6hj" podStartSLOduration=7.516319452 podStartE2EDuration="26.204289375s" podCreationTimestamp="2025-07-14 22:41:13 +0000 UTC" firstStartedPulling="2025-07-14 22:41:14.834578719 +0000 UTC 
m=+5.938573821" lastFinishedPulling="2025-07-14 22:41:33.522548642 +0000 UTC m=+24.626543744" observedRunningTime="2025-07-14 22:41:39.204210714 +0000 UTC m=+30.308205824" watchObservedRunningTime="2025-07-14 22:41:39.204289375 +0000 UTC m=+30.308284476" Jul 14 22:41:40.413723 systemd-networkd[1517]: cilium_host: Link UP Jul 14 22:41:40.415027 systemd-networkd[1517]: cilium_net: Link UP Jul 14 22:41:40.415204 systemd-networkd[1517]: cilium_net: Gained carrier Jul 14 22:41:40.417159 systemd-networkd[1517]: cilium_host: Gained carrier Jul 14 22:41:40.547231 systemd-networkd[1517]: cilium_vxlan: Link UP Jul 14 22:41:40.547290 systemd-networkd[1517]: cilium_vxlan: Gained carrier Jul 14 22:41:40.911272 kernel: NET: Registered PF_ALG protocol family Jul 14 22:41:41.184406 systemd-networkd[1517]: cilium_net: Gained IPv6LL Jul 14 22:41:41.312432 systemd-networkd[1517]: cilium_host: Gained IPv6LL Jul 14 22:41:41.352390 systemd-networkd[1517]: lxc_health: Link UP Jul 14 22:41:41.357442 systemd-networkd[1517]: lxc_health: Gained carrier Jul 14 22:41:41.786689 kernel: eth0: renamed from tmp73f4e Jul 14 22:41:41.786703 systemd-networkd[1517]: lxc805ac7a307b1: Link UP Jul 14 22:41:41.787396 systemd-networkd[1517]: lxc23af401278e0: Link UP Jul 14 22:41:41.795750 kernel: eth0: renamed from tmp69ab9 Jul 14 22:41:41.794931 systemd-networkd[1517]: lxc805ac7a307b1: Gained carrier Jul 14 22:41:41.796063 systemd-networkd[1517]: lxc23af401278e0: Gained carrier Jul 14 22:41:42.208414 systemd-networkd[1517]: cilium_vxlan: Gained IPv6LL Jul 14 22:41:43.168379 systemd-networkd[1517]: lxc23af401278e0: Gained IPv6LL Jul 14 22:41:43.296331 systemd-networkd[1517]: lxc_health: Gained IPv6LL Jul 14 22:41:43.680393 systemd-networkd[1517]: lxc805ac7a307b1: Gained IPv6LL Jul 14 22:41:44.497813 containerd[1660]: time="2025-07-14T22:41:44.497772483Z" level=info msg="connecting to shim 73f4e4f8c758a155fc45497878083589c38829fd4420d4867b0aeadb915ff988" address="unix:///run/containerd/s/50bba82d4d1c81265f0efae625fd1b807941f850e4276d1a6b53567273ab43b1" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:44.503404 containerd[1660]: time="2025-07-14T22:41:44.503330720Z" level=info msg="connecting to shim 69ab9c6e633b12c9d7382084f623a74125321afedc00b9a4128df7d8101ebd11" address="unix:///run/containerd/s/3f3cd4dcc45420b12bfdcb62178aa84e91be554813f3a59dba82d0ac2a668d6a" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:41:44.529524 systemd[1]: Started cri-containerd-73f4e4f8c758a155fc45497878083589c38829fd4420d4867b0aeadb915ff988.scope - libcontainer container 73f4e4f8c758a155fc45497878083589c38829fd4420d4867b0aeadb915ff988. Jul 14 22:41:44.542111 systemd[1]: Started cri-containerd-69ab9c6e633b12c9d7382084f623a74125321afedc00b9a4128df7d8101ebd11.scope - libcontainer container 69ab9c6e633b12c9d7382084f623a74125321afedc00b9a4128df7d8101ebd11. 
Jul 14 22:41:44.555459 systemd-resolved[1519]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:41:44.561555 systemd-resolved[1519]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:41:44.595596 containerd[1660]: time="2025-07-14T22:41:44.595539464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j8c5d,Uid:24369ebc-86b8-445c-a63b-ba33e79812b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"73f4e4f8c758a155fc45497878083589c38829fd4420d4867b0aeadb915ff988\"" Jul 14 22:41:44.595962 containerd[1660]: time="2025-07-14T22:41:44.595834181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xd9gw,Uid:56001680-c8d6-42a7-b5ae-5d5fcfcc63df,Namespace:kube-system,Attempt:0,} returns sandbox id \"69ab9c6e633b12c9d7382084f623a74125321afedc00b9a4128df7d8101ebd11\"" Jul 14 22:41:44.598009 containerd[1660]: time="2025-07-14T22:41:44.597991982Z" level=info msg="CreateContainer within sandbox \"69ab9c6e633b12c9d7382084f623a74125321afedc00b9a4128df7d8101ebd11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:41:44.598688 containerd[1660]: time="2025-07-14T22:41:44.598184579Z" level=info msg="CreateContainer within sandbox \"73f4e4f8c758a155fc45497878083589c38829fd4420d4867b0aeadb915ff988\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:41:44.644202 containerd[1660]: time="2025-07-14T22:41:44.644179038Z" level=info msg="Container 45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:44.644384 containerd[1660]: time="2025-07-14T22:41:44.644187812Z" level=info msg="Container c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:41:44.648130 containerd[1660]: time="2025-07-14T22:41:44.647898216Z" level=info msg="CreateContainer within sandbox \"69ab9c6e633b12c9d7382084f623a74125321afedc00b9a4128df7d8101ebd11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30\"" Jul 14 22:41:44.648584 containerd[1660]: time="2025-07-14T22:41:44.648565446Z" level=info msg="StartContainer for \"45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30\"" Jul 14 22:41:44.649857 containerd[1660]: time="2025-07-14T22:41:44.649260145Z" level=info msg="CreateContainer within sandbox \"73f4e4f8c758a155fc45497878083589c38829fd4420d4867b0aeadb915ff988\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8\"" Jul 14 22:41:44.649857 containerd[1660]: time="2025-07-14T22:41:44.649279066Z" level=info msg="connecting to shim 45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30" address="unix:///run/containerd/s/3f3cd4dcc45420b12bfdcb62178aa84e91be554813f3a59dba82d0ac2a668d6a" protocol=ttrpc version=3 Jul 14 22:41:44.649857 containerd[1660]: time="2025-07-14T22:41:44.649787648Z" level=info msg="StartContainer for \"c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8\"" Jul 14 22:41:44.651071 containerd[1660]: time="2025-07-14T22:41:44.651027804Z" level=info msg="connecting to shim c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8" address="unix:///run/containerd/s/50bba82d4d1c81265f0efae625fd1b807941f850e4276d1a6b53567273ab43b1" protocol=ttrpc version=3 Jul 14 22:41:44.671369 
systemd[1]: Started cri-containerd-45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30.scope - libcontainer container 45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30. Jul 14 22:41:44.674656 systemd[1]: Started cri-containerd-c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8.scope - libcontainer container c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8. Jul 14 22:41:44.705342 containerd[1660]: time="2025-07-14T22:41:44.705316532Z" level=info msg="StartContainer for \"45418ce720bd68d0aba15394807f0783828488e40ca1649729096b72a0efad30\" returns successfully" Jul 14 22:41:44.709851 containerd[1660]: time="2025-07-14T22:41:44.709822169Z" level=info msg="StartContainer for \"c5af988b81a88ab41fac93a10c462c557ded4645a3f172bc431966303c6c08c8\" returns successfully" Jul 14 22:41:45.245001 kubelet[3084]: I0714 22:41:45.244956 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xd9gw" podStartSLOduration=32.244945131 podStartE2EDuration="32.244945131s" podCreationTimestamp="2025-07-14 22:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:45.243645407 +0000 UTC m=+36.347640517" watchObservedRunningTime="2025-07-14 22:41:45.244945131 +0000 UTC m=+36.348940236" Jul 14 22:41:45.258956 kubelet[3084]: I0714 22:41:45.258918 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j8c5d" podStartSLOduration=32.258905236 podStartE2EDuration="32.258905236s" podCreationTimestamp="2025-07-14 22:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:45.257959545 +0000 UTC m=+36.361954656" watchObservedRunningTime="2025-07-14 22:41:45.258905236 +0000 UTC m=+36.362900341" Jul 14 22:41:45.479395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882805338.mount: Deactivated successfully. Jul 14 22:42:34.024960 systemd[1]: Started sshd@8-139.178.70.108:22-139.178.89.65:46428.service - OpenSSH per-connection server daemon (139.178.89.65:46428). Jul 14 22:42:34.093633 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 46428 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:42:34.095200 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:42:34.104272 systemd-logind[1629]: New session 10 of user core. Jul 14 22:42:34.108330 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 22:42:34.536507 sshd[4413]: Connection closed by 139.178.89.65 port 46428 Jul 14 22:42:34.536871 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:34.540091 systemd[1]: sshd@8-139.178.70.108:22-139.178.89.65:46428.service: Deactivated successfully. Jul 14 22:42:34.541486 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:42:34.543217 systemd-logind[1629]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:42:34.544065 systemd-logind[1629]: Removed session 10. Jul 14 22:42:39.552737 systemd[1]: Started sshd@9-139.178.70.108:22-139.178.89.65:59012.service - OpenSSH per-connection server daemon (139.178.89.65:59012). 
Jul 14 22:42:39.594138 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 59012 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:42:39.595025 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:42:39.597942 systemd-logind[1629]: New session 11 of user core. Jul 14 22:42:39.602344 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 22:42:39.704345 sshd[4429]: Connection closed by 139.178.89.65 port 59012 Jul 14 22:42:39.705345 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:39.707851 systemd[1]: sshd@9-139.178.70.108:22-139.178.89.65:59012.service: Deactivated successfully. Jul 14 22:42:39.708965 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:42:39.709517 systemd-logind[1629]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:42:39.710433 systemd-logind[1629]: Removed session 11. Jul 14 22:42:44.715191 systemd[1]: Started sshd@10-139.178.70.108:22-139.178.89.65:59024.service - OpenSSH per-connection server daemon (139.178.89.65:59024). Jul 14 22:42:44.758135 sshd[4446]: Accepted publickey for core from 139.178.89.65 port 59024 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:42:44.758876 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:42:44.761517 systemd-logind[1629]: New session 12 of user core. Jul 14 22:42:44.770324 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 22:42:44.858908 sshd[4449]: Connection closed by 139.178.89.65 port 59024 Jul 14 22:42:44.859249 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:44.861444 systemd-logind[1629]: Session 12 logged out. Waiting for processes to exit. Jul 14 22:42:44.861596 systemd[1]: sshd@10-139.178.70.108:22-139.178.89.65:59024.service: Deactivated successfully. Jul 14 22:42:44.863677 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:42:44.865571 systemd-logind[1629]: Removed session 12. Jul 14 22:42:49.869797 systemd[1]: Started sshd@11-139.178.70.108:22-139.178.89.65:49754.service - OpenSSH per-connection server daemon (139.178.89.65:49754). Jul 14 22:42:49.911544 sshd[4464]: Accepted publickey for core from 139.178.89.65 port 49754 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:42:49.912439 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:42:49.915159 systemd-logind[1629]: New session 13 of user core. Jul 14 22:42:49.922359 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 22:42:50.024568 sshd[4467]: Connection closed by 139.178.89.65 port 49754 Jul 14 22:42:50.025866 sshd-session[4464]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:50.028094 systemd[1]: sshd@11-139.178.70.108:22-139.178.89.65:49754.service: Deactivated successfully. Jul 14 22:42:50.029227 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:42:50.029765 systemd-logind[1629]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:42:50.030534 systemd-logind[1629]: Removed session 13. Jul 14 22:42:55.038409 systemd[1]: Started sshd@12-139.178.70.108:22-139.178.89.65:49766.service - OpenSSH per-connection server daemon (139.178.89.65:49766). 
Jul 14 22:42:55.205785 sshd[4479]: Accepted publickey for core from 139.178.89.65 port 49766 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:42:55.206616 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:42:55.209474 systemd-logind[1629]: New session 14 of user core. Jul 14 22:42:55.217318 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 22:42:55.310135 sshd[4482]: Connection closed by 139.178.89.65 port 49766 Jul 14 22:42:55.309700 sshd-session[4479]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:55.315980 systemd-logind[1629]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:42:55.316035 systemd[1]: sshd@12-139.178.70.108:22-139.178.89.65:49766.service: Deactivated successfully. Jul 14 22:42:55.317265 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:42:55.318217 systemd-logind[1629]: Removed session 14. Jul 14 22:43:00.319674 systemd[1]: Started sshd@13-139.178.70.108:22-139.178.89.65:54468.service - OpenSSH per-connection server daemon (139.178.89.65:54468). Jul 14 22:43:00.368636 sshd[4495]: Accepted publickey for core from 139.178.89.65 port 54468 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:00.369371 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:00.371967 systemd-logind[1629]: New session 15 of user core. Jul 14 22:43:00.380429 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 22:43:00.484962 sshd[4498]: Connection closed by 139.178.89.65 port 54468 Jul 14 22:43:00.485332 sshd-session[4495]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:00.487541 systemd[1]: sshd@13-139.178.70.108:22-139.178.89.65:54468.service: Deactivated successfully. Jul 14 22:43:00.488664 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:43:00.489148 systemd-logind[1629]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:43:00.489962 systemd-logind[1629]: Removed session 15. Jul 14 22:43:05.496309 systemd[1]: Started sshd@14-139.178.70.108:22-139.178.89.65:54478.service - OpenSSH per-connection server daemon (139.178.89.65:54478). Jul 14 22:43:05.535026 sshd[4511]: Accepted publickey for core from 139.178.89.65 port 54478 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:05.535834 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:05.539203 systemd-logind[1629]: New session 16 of user core. Jul 14 22:43:05.546321 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 22:43:05.635888 sshd[4514]: Connection closed by 139.178.89.65 port 54478 Jul 14 22:43:05.636331 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:05.643095 systemd[1]: sshd@14-139.178.70.108:22-139.178.89.65:54478.service: Deactivated successfully. Jul 14 22:43:05.644498 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:43:05.645414 systemd-logind[1629]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:43:05.648679 systemd[1]: Started sshd@15-139.178.70.108:22-139.178.89.65:54486.service - OpenSSH per-connection server daemon (139.178.89.65:54486). Jul 14 22:43:05.650111 systemd-logind[1629]: Removed session 16. 
Jul 14 22:43:05.685063 sshd[4527]: Accepted publickey for core from 139.178.89.65 port 54486 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:05.685618 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:05.688743 systemd-logind[1629]: New session 17 of user core. Jul 14 22:43:05.695340 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 22:43:05.802031 sshd[4530]: Connection closed by 139.178.89.65 port 54486 Jul 14 22:43:05.802786 sshd-session[4527]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:05.809797 systemd[1]: sshd@15-139.178.70.108:22-139.178.89.65:54486.service: Deactivated successfully. Jul 14 22:43:05.812021 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:43:05.813438 systemd-logind[1629]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:43:05.816628 systemd[1]: Started sshd@16-139.178.70.108:22-139.178.89.65:54496.service - OpenSSH per-connection server daemon (139.178.89.65:54496). Jul 14 22:43:05.818727 systemd-logind[1629]: Removed session 17. Jul 14 22:43:05.859092 sshd[4540]: Accepted publickey for core from 139.178.89.65 port 54496 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:05.859881 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:05.863122 systemd-logind[1629]: New session 18 of user core. Jul 14 22:43:05.866360 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 22:43:05.955694 sshd[4543]: Connection closed by 139.178.89.65 port 54496 Jul 14 22:43:05.956123 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:05.958364 systemd[1]: sshd@16-139.178.70.108:22-139.178.89.65:54496.service: Deactivated successfully. Jul 14 22:43:05.959396 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:43:05.959945 systemd-logind[1629]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:43:05.960644 systemd-logind[1629]: Removed session 18. Jul 14 22:43:10.966600 systemd[1]: Started sshd@17-139.178.70.108:22-139.178.89.65:34000.service - OpenSSH per-connection server daemon (139.178.89.65:34000). Jul 14 22:43:11.008996 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 34000 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:11.009713 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:11.012497 systemd-logind[1629]: New session 19 of user core. Jul 14 22:43:11.022440 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 22:43:11.115252 sshd[4560]: Connection closed by 139.178.89.65 port 34000 Jul 14 22:43:11.115581 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:11.117420 systemd-logind[1629]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:43:11.117581 systemd[1]: sshd@17-139.178.70.108:22-139.178.89.65:34000.service: Deactivated successfully. Jul 14 22:43:11.118661 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:43:11.119899 systemd-logind[1629]: Removed session 19. Jul 14 22:43:16.129973 systemd[1]: Started sshd@18-139.178.70.108:22-139.178.89.65:34008.service - OpenSSH per-connection server daemon (139.178.89.65:34008). 
Jul 14 22:43:16.167699 sshd[4574]: Accepted publickey for core from 139.178.89.65 port 34008 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:16.168601 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:16.172278 systemd-logind[1629]: New session 20 of user core. Jul 14 22:43:16.182404 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 22:43:16.273303 sshd[4577]: Connection closed by 139.178.89.65 port 34008 Jul 14 22:43:16.272984 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:16.274954 systemd-logind[1629]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:43:16.275026 systemd[1]: sshd@18-139.178.70.108:22-139.178.89.65:34008.service: Deactivated successfully. Jul 14 22:43:16.275999 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:43:16.277192 systemd-logind[1629]: Removed session 20. Jul 14 22:43:21.283997 systemd[1]: Started sshd@19-139.178.70.108:22-139.178.89.65:54352.service - OpenSSH per-connection server daemon (139.178.89.65:54352). Jul 14 22:43:21.328290 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 54352 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:21.329129 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:21.332589 systemd-logind[1629]: New session 21 of user core. Jul 14 22:43:21.340351 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 22:43:21.424773 sshd[4592]: Connection closed by 139.178.89.65 port 54352 Jul 14 22:43:21.424966 sshd-session[4589]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:21.427359 systemd[1]: sshd@19-139.178.70.108:22-139.178.89.65:54352.service: Deactivated successfully. Jul 14 22:43:21.428450 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:43:21.428965 systemd-logind[1629]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:43:21.429715 systemd-logind[1629]: Removed session 21. Jul 14 22:43:26.440823 systemd[1]: Started sshd@20-139.178.70.108:22-139.178.89.65:54354.service - OpenSSH per-connection server daemon (139.178.89.65:54354). Jul 14 22:43:26.481834 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 54354 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:26.482842 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:26.486227 systemd-logind[1629]: New session 22 of user core. Jul 14 22:43:26.499467 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 22:43:26.591797 sshd[4607]: Connection closed by 139.178.89.65 port 54354 Jul 14 22:43:26.592343 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:26.594706 systemd[1]: sshd@20-139.178.70.108:22-139.178.89.65:54354.service: Deactivated successfully. Jul 14 22:43:26.595764 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:43:26.596303 systemd-logind[1629]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:43:26.597064 systemd-logind[1629]: Removed session 22. Jul 14 22:43:31.601976 systemd[1]: Started sshd@21-139.178.70.108:22-139.178.89.65:40124.service - OpenSSH per-connection server daemon (139.178.89.65:40124). 
Jul 14 22:43:31.655937 sshd[4618]: Accepted publickey for core from 139.178.89.65 port 40124 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:31.656612 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:31.659150 systemd-logind[1629]: New session 23 of user core. Jul 14 22:43:31.666437 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 14 22:43:31.780431 sshd[4621]: Connection closed by 139.178.89.65 port 40124 Jul 14 22:43:31.781471 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:31.784696 systemd[1]: sshd@21-139.178.70.108:22-139.178.89.65:40124.service: Deactivated successfully. Jul 14 22:43:31.785860 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 22:43:31.786420 systemd-logind[1629]: Session 23 logged out. Waiting for processes to exit. Jul 14 22:43:31.787191 systemd-logind[1629]: Removed session 23. Jul 14 22:43:36.790911 systemd[1]: Started sshd@22-139.178.70.108:22-139.178.89.65:40134.service - OpenSSH per-connection server daemon (139.178.89.65:40134). Jul 14 22:43:36.851741 sshd[4632]: Accepted publickey for core from 139.178.89.65 port 40134 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:36.852380 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:36.855357 systemd-logind[1629]: New session 24 of user core. Jul 14 22:43:36.863436 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 14 22:43:36.958022 sshd[4635]: Connection closed by 139.178.89.65 port 40134 Jul 14 22:43:36.957718 sshd-session[4632]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:36.959629 systemd[1]: sshd@22-139.178.70.108:22-139.178.89.65:40134.service: Deactivated successfully. Jul 14 22:43:36.960782 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 22:43:36.961950 systemd-logind[1629]: Session 24 logged out. Waiting for processes to exit. Jul 14 22:43:36.962762 systemd-logind[1629]: Removed session 24. Jul 14 22:43:41.967020 systemd[1]: Started sshd@23-139.178.70.108:22-139.178.89.65:53600.service - OpenSSH per-connection server daemon (139.178.89.65:53600). Jul 14 22:43:42.004761 sshd[4647]: Accepted publickey for core from 139.178.89.65 port 53600 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:42.005537 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:42.008204 systemd-logind[1629]: New session 25 of user core. Jul 14 22:43:42.014320 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 14 22:43:42.105554 sshd[4650]: Connection closed by 139.178.89.65 port 53600 Jul 14 22:43:42.105894 sshd-session[4647]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:42.108061 systemd[1]: sshd@23-139.178.70.108:22-139.178.89.65:53600.service: Deactivated successfully. Jul 14 22:43:42.109406 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 22:43:42.110033 systemd-logind[1629]: Session 25 logged out. Waiting for processes to exit. Jul 14 22:43:42.110988 systemd-logind[1629]: Removed session 25. Jul 14 22:43:47.120415 systemd[1]: Started sshd@24-139.178.70.108:22-139.178.89.65:53606.service - OpenSSH per-connection server daemon (139.178.89.65:53606). 
Jul 14 22:43:47.157667 sshd[4663]: Accepted publickey for core from 139.178.89.65 port 53606 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:47.158316 sshd-session[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:47.160750 systemd-logind[1629]: New session 26 of user core. Jul 14 22:43:47.171485 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 14 22:43:47.261832 sshd[4666]: Connection closed by 139.178.89.65 port 53606 Jul 14 22:43:47.262158 sshd-session[4663]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:47.264372 systemd[1]: sshd@24-139.178.70.108:22-139.178.89.65:53606.service: Deactivated successfully. Jul 14 22:43:47.265383 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 22:43:47.265979 systemd-logind[1629]: Session 26 logged out. Waiting for processes to exit. Jul 14 22:43:47.266704 systemd-logind[1629]: Removed session 26. Jul 14 22:43:52.273016 systemd[1]: Started sshd@25-139.178.70.108:22-139.178.89.65:57620.service - OpenSSH per-connection server daemon (139.178.89.65:57620). Jul 14 22:43:52.317521 sshd[4677]: Accepted publickey for core from 139.178.89.65 port 57620 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:52.318399 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:52.320962 systemd-logind[1629]: New session 27 of user core. Jul 14 22:43:52.328441 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 14 22:43:52.413661 sshd[4680]: Connection closed by 139.178.89.65 port 57620 Jul 14 22:43:52.414082 sshd-session[4677]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:52.415945 systemd[1]: sshd@25-139.178.70.108:22-139.178.89.65:57620.service: Deactivated successfully. Jul 14 22:43:52.417064 systemd[1]: session-27.scope: Deactivated successfully. Jul 14 22:43:52.418162 systemd-logind[1629]: Session 27 logged out. Waiting for processes to exit. Jul 14 22:43:52.418732 systemd-logind[1629]: Removed session 27. Jul 14 22:43:57.425392 systemd[1]: Started sshd@26-139.178.70.108:22-139.178.89.65:57626.service - OpenSSH per-connection server daemon (139.178.89.65:57626). Jul 14 22:43:57.462204 sshd[4692]: Accepted publickey for core from 139.178.89.65 port 57626 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:43:57.462812 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:43:57.465272 systemd-logind[1629]: New session 28 of user core. Jul 14 22:43:57.481428 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 14 22:43:57.569275 sshd[4695]: Connection closed by 139.178.89.65 port 57626 Jul 14 22:43:57.569625 sshd-session[4692]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:57.571565 systemd-logind[1629]: Session 28 logged out. Waiting for processes to exit. Jul 14 22:43:57.572130 systemd[1]: sshd@26-139.178.70.108:22-139.178.89.65:57626.service: Deactivated successfully. Jul 14 22:43:57.573251 systemd[1]: session-28.scope: Deactivated successfully. Jul 14 22:43:57.574506 systemd-logind[1629]: Removed session 28. Jul 14 22:44:02.583002 systemd[1]: Started sshd@27-139.178.70.108:22-139.178.89.65:57342.service - OpenSSH per-connection server daemon (139.178.89.65:57342). 
Jul 14 22:44:02.626331 sshd[4707]: Accepted publickey for core from 139.178.89.65 port 57342 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:02.627158 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:02.630630 systemd-logind[1629]: New session 29 of user core. Jul 14 22:44:02.638433 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 14 22:44:02.729753 sshd[4710]: Connection closed by 139.178.89.65 port 57342 Jul 14 22:44:02.730093 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:02.737149 systemd[1]: sshd@27-139.178.70.108:22-139.178.89.65:57342.service: Deactivated successfully. Jul 14 22:44:02.738171 systemd[1]: session-29.scope: Deactivated successfully. Jul 14 22:44:02.738886 systemd-logind[1629]: Session 29 logged out. Waiting for processes to exit. Jul 14 22:44:02.739955 systemd-logind[1629]: Removed session 29. Jul 14 22:44:02.740899 systemd[1]: Started sshd@28-139.178.70.108:22-139.178.89.65:57344.service - OpenSSH per-connection server daemon (139.178.89.65:57344). Jul 14 22:44:02.775757 sshd[4721]: Accepted publickey for core from 139.178.89.65 port 57344 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:02.776612 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:02.780812 systemd-logind[1629]: New session 30 of user core. Jul 14 22:44:02.785417 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 14 22:44:03.139088 sshd[4725]: Connection closed by 139.178.89.65 port 57344 Jul 14 22:44:03.139649 sshd-session[4721]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:03.147916 systemd[1]: sshd@28-139.178.70.108:22-139.178.89.65:57344.service: Deactivated successfully. Jul 14 22:44:03.149164 systemd[1]: session-30.scope: Deactivated successfully. Jul 14 22:44:03.149776 systemd-logind[1629]: Session 30 logged out. Waiting for processes to exit. Jul 14 22:44:03.151301 systemd[1]: Started sshd@29-139.178.70.108:22-139.178.89.65:57356.service - OpenSSH per-connection server daemon (139.178.89.65:57356). Jul 14 22:44:03.152576 systemd-logind[1629]: Removed session 30. Jul 14 22:44:03.203519 sshd[4735]: Accepted publickey for core from 139.178.89.65 port 57356 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:03.204377 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:03.208535 systemd-logind[1629]: New session 31 of user core. Jul 14 22:44:03.214418 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 14 22:44:09.438778 sshd[4738]: Connection closed by 139.178.89.65 port 57356 Jul 14 22:44:09.439901 sshd-session[4735]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:09.449115 systemd[1]: Started sshd@30-139.178.70.108:22-139.178.89.65:55786.service - OpenSSH per-connection server daemon (139.178.89.65:55786). Jul 14 22:44:09.450449 systemd[1]: sshd@29-139.178.70.108:22-139.178.89.65:57356.service: Deactivated successfully. Jul 14 22:44:09.451855 systemd[1]: session-31.scope: Deactivated successfully. Jul 14 22:44:09.454597 systemd-logind[1629]: Session 31 logged out. Waiting for processes to exit. Jul 14 22:44:09.456347 systemd-logind[1629]: Removed session 31. 
Jul 14 22:44:09.497860 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 55786 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:09.499844 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:09.505851 systemd-logind[1629]: New session 32 of user core. Jul 14 22:44:09.511321 systemd[1]: Started session-32.scope - Session 32 of User core. Jul 14 22:44:09.694110 sshd[4763]: Connection closed by 139.178.89.65 port 55786 Jul 14 22:44:09.694684 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:09.703428 systemd[1]: sshd@30-139.178.70.108:22-139.178.89.65:55786.service: Deactivated successfully. Jul 14 22:44:09.704984 systemd[1]: session-32.scope: Deactivated successfully. Jul 14 22:44:09.705814 systemd-logind[1629]: Session 32 logged out. Waiting for processes to exit. Jul 14 22:44:09.707069 systemd[1]: Started sshd@31-139.178.70.108:22-139.178.89.65:55788.service - OpenSSH per-connection server daemon (139.178.89.65:55788). Jul 14 22:44:09.708576 systemd-logind[1629]: Removed session 32. Jul 14 22:44:09.743585 sshd[4772]: Accepted publickey for core from 139.178.89.65 port 55788 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:09.744209 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:09.747276 systemd-logind[1629]: New session 33 of user core. Jul 14 22:44:09.753451 systemd[1]: Started session-33.scope - Session 33 of User core. Jul 14 22:44:09.872413 sshd[4775]: Connection closed by 139.178.89.65 port 55788 Jul 14 22:44:09.872764 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:09.875335 systemd[1]: sshd@31-139.178.70.108:22-139.178.89.65:55788.service: Deactivated successfully. Jul 14 22:44:09.876390 systemd[1]: session-33.scope: Deactivated successfully. Jul 14 22:44:09.876879 systemd-logind[1629]: Session 33 logged out. Waiting for processes to exit. Jul 14 22:44:09.877724 systemd-logind[1629]: Removed session 33. Jul 14 22:44:13.423145 update_engine[1630]: I20250714 22:44:13.423095 1630 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 14 22:44:13.423145 update_engine[1630]: I20250714 22:44:13.423138 1630 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425179 1630 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425433 1630 omaha_request_params.cc:62] Current group set to developer Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425511 1630 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425519 1630 update_attempter.cc:643] Scheduling an action processor start. 
Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425529 1630 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425554 1630 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425585 1630 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425590 1630 omaha_request_action.cc:272] Request: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: Jul 14 22:44:13.425763 update_engine[1630]: I20250714 22:44:13.425594 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 14 22:44:13.437217 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 14 22:44:13.439543 update_engine[1630]: I20250714 22:44:13.439523 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 14 22:44:13.439753 update_engine[1630]: I20250714 22:44:13.439728 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 14 22:44:13.445215 update_engine[1630]: E20250714 22:44:13.445186 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 14 22:44:13.445276 update_engine[1630]: I20250714 22:44:13.445255 1630 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 14 22:44:14.888793 systemd[1]: Started sshd@32-139.178.70.108:22-139.178.89.65:55792.service - OpenSSH per-connection server daemon (139.178.89.65:55792). Jul 14 22:44:14.931491 sshd[4790]: Accepted publickey for core from 139.178.89.65 port 55792 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:14.932169 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:14.934792 systemd-logind[1629]: New session 34 of user core. Jul 14 22:44:14.944420 systemd[1]: Started session-34.scope - Session 34 of User core. Jul 14 22:44:15.036729 sshd[4793]: Connection closed by 139.178.89.65 port 55792 Jul 14 22:44:15.037058 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:15.039174 systemd[1]: sshd@32-139.178.70.108:22-139.178.89.65:55792.service: Deactivated successfully. Jul 14 22:44:15.040364 systemd[1]: session-34.scope: Deactivated successfully. Jul 14 22:44:15.040866 systemd-logind[1629]: Session 34 logged out. Waiting for processes to exit. Jul 14 22:44:15.041709 systemd-logind[1629]: Removed session 34. Jul 14 22:44:20.051960 systemd[1]: Started sshd@33-139.178.70.108:22-139.178.89.65:56094.service - OpenSSH per-connection server daemon (139.178.89.65:56094). Jul 14 22:44:20.107907 sshd[4808]: Accepted publickey for core from 139.178.89.65 port 56094 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:20.108829 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:20.112122 systemd-logind[1629]: New session 35 of user core. Jul 14 22:44:20.117390 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jul 14 22:44:20.210609 sshd[4811]: Connection closed by 139.178.89.65 port 56094 Jul 14 22:44:20.211041 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:20.213210 systemd[1]: sshd@33-139.178.70.108:22-139.178.89.65:56094.service: Deactivated successfully. Jul 14 22:44:20.214367 systemd[1]: session-35.scope: Deactivated successfully. Jul 14 22:44:20.214848 systemd-logind[1629]: Session 35 logged out. Waiting for processes to exit. Jul 14 22:44:20.215688 systemd-logind[1629]: Removed session 35. Jul 14 22:44:23.390578 update_engine[1630]: I20250714 22:44:23.390518 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 14 22:44:23.390885 update_engine[1630]: I20250714 22:44:23.390702 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 14 22:44:23.390928 update_engine[1630]: I20250714 22:44:23.390909 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 14 22:44:23.395355 update_engine[1630]: E20250714 22:44:23.395328 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 14 22:44:23.395395 update_engine[1630]: I20250714 22:44:23.395383 1630 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 14 22:44:25.221336 systemd[1]: Started sshd@34-139.178.70.108:22-139.178.89.65:56106.service - OpenSSH per-connection server daemon (139.178.89.65:56106). Jul 14 22:44:25.264224 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 56106 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:25.265120 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:25.270331 systemd-logind[1629]: New session 36 of user core. Jul 14 22:44:25.276451 systemd[1]: Started session-36.scope - Session 36 of User core. Jul 14 22:44:25.368695 sshd[4826]: Connection closed by 139.178.89.65 port 56106 Jul 14 22:44:25.368287 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:25.374520 systemd[1]: sshd@34-139.178.70.108:22-139.178.89.65:56106.service: Deactivated successfully. Jul 14 22:44:25.375789 systemd[1]: session-36.scope: Deactivated successfully. Jul 14 22:44:25.376383 systemd-logind[1629]: Session 36 logged out. Waiting for processes to exit. Jul 14 22:44:25.378078 systemd[1]: Started sshd@35-139.178.70.108:22-139.178.89.65:56120.service - OpenSSH per-connection server daemon (139.178.89.65:56120). Jul 14 22:44:25.378656 systemd-logind[1629]: Removed session 36. Jul 14 22:44:25.412465 sshd[4838]: Accepted publickey for core from 139.178.89.65 port 56120 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:25.413432 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:25.416919 systemd-logind[1629]: New session 37 of user core. Jul 14 22:44:25.425429 systemd[1]: Started session-37.scope - Session 37 of User core. Jul 14 22:44:26.843343 containerd[1660]: time="2025-07-14T22:44:26.843299197Z" level=info msg="StopContainer for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" with timeout 30 (s)" Jul 14 22:44:26.858925 containerd[1660]: time="2025-07-14T22:44:26.858591432Z" level=info msg="Stop container \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" with signal terminated" Jul 14 22:44:26.871644 systemd[1]: cri-containerd-86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478.scope: Deactivated successfully. 
Jul 14 22:44:26.872497 systemd[1]: cri-containerd-86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478.scope: Consumed 312ms CPU time, 35M memory peak, 13.7M read from disk, 4K written to disk. Jul 14 22:44:26.887540 containerd[1660]: time="2025-07-14T22:44:26.886997508Z" level=info msg="received exit event container_id:\"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" id:\"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" pid:3483 exited_at:{seconds:1752533066 nanos:872827762}" Jul 14 22:44:26.887993 containerd[1660]: time="2025-07-14T22:44:26.887977652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" id:\"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" pid:3483 exited_at:{seconds:1752533066 nanos:872827762}" Jul 14 22:44:26.893648 containerd[1660]: time="2025-07-14T22:44:26.893622048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" id:\"5599d708b483cee52d1aa7c200bf9c09f711a0e98d571d27db79b96540ac5175\" pid:4869 exited_at:{seconds:1752533066 nanos:893269468}" Jul 14 22:44:26.901335 containerd[1660]: time="2025-07-14T22:44:26.901301235Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:44:26.903054 containerd[1660]: time="2025-07-14T22:44:26.902887146Z" level=info msg="StopContainer for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" with timeout 2 (s)" Jul 14 22:44:26.903454 containerd[1660]: time="2025-07-14T22:44:26.903443502Z" level=info msg="Stop container \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" with signal terminated" Jul 14 22:44:26.908147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478-rootfs.mount: Deactivated successfully. Jul 14 22:44:26.911934 containerd[1660]: time="2025-07-14T22:44:26.911902598Z" level=info msg="StopContainer for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" returns successfully" Jul 14 22:44:26.912663 systemd-networkd[1517]: lxc_health: Link DOWN Jul 14 22:44:26.912668 systemd-networkd[1517]: lxc_health: Lost carrier Jul 14 22:44:26.923711 containerd[1660]: time="2025-07-14T22:44:26.923684547Z" level=info msg="StopPodSandbox for \"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\"" Jul 14 22:44:26.931165 systemd[1]: cri-containerd-aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36.scope: Deactivated successfully. Jul 14 22:44:26.931469 systemd[1]: cri-containerd-aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36.scope: Consumed 4.742s CPU time, 194.5M memory peak, 70.8M read from disk, 13.3M written to disk. 
Jul 14 22:44:26.932505 containerd[1660]: time="2025-07-14T22:44:26.932420627Z" level=info msg="received exit event container_id:\"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" id:\"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" pid:3719 exited_at:{seconds:1752533066 nanos:932180970}" Jul 14 22:44:26.932742 containerd[1660]: time="2025-07-14T22:44:26.932731821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" id:\"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" pid:3719 exited_at:{seconds:1752533066 nanos:932180970}" Jul 14 22:44:26.934891 containerd[1660]: time="2025-07-14T22:44:26.934703402Z" level=info msg="Container to stop \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:26.939815 systemd[1]: cri-containerd-cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7.scope: Deactivated successfully. Jul 14 22:44:26.944718 containerd[1660]: time="2025-07-14T22:44:26.944694669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" id:\"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" pid:3193 exit_status:137 exited_at:{seconds:1752533066 nanos:944513845}" Jul 14 22:44:26.948560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36-rootfs.mount: Deactivated successfully. Jul 14 22:44:26.955146 containerd[1660]: time="2025-07-14T22:44:26.955106818Z" level=info msg="StopContainer for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" returns successfully" Jul 14 22:44:26.955413 containerd[1660]: time="2025-07-14T22:44:26.955386137Z" level=info msg="StopPodSandbox for \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\"" Jul 14 22:44:26.955437 containerd[1660]: time="2025-07-14T22:44:26.955415471Z" level=info msg="Container to stop \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:26.955437 containerd[1660]: time="2025-07-14T22:44:26.955421662Z" level=info msg="Container to stop \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:26.955437 containerd[1660]: time="2025-07-14T22:44:26.955426654Z" level=info msg="Container to stop \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:26.955437 containerd[1660]: time="2025-07-14T22:44:26.955431123Z" level=info msg="Container to stop \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:26.955437 containerd[1660]: time="2025-07-14T22:44:26.955435119Z" level=info msg="Container to stop \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:26.960287 systemd[1]: cri-containerd-58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408.scope: Deactivated successfully. 
Jul 14 22:44:26.969219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7-rootfs.mount: Deactivated successfully. Jul 14 22:44:26.979462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408-rootfs.mount: Deactivated successfully. Jul 14 22:44:26.986209 containerd[1660]: time="2025-07-14T22:44:26.986184749Z" level=info msg="TearDown network for sandbox \"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" successfully" Jul 14 22:44:26.986325 containerd[1660]: time="2025-07-14T22:44:26.986316311Z" level=info msg="StopPodSandbox for \"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" returns successfully" Jul 14 22:44:26.987059 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7-shm.mount: Deactivated successfully. Jul 14 22:44:26.987654 containerd[1660]: time="2025-07-14T22:44:26.987525485Z" level=info msg="shim disconnected" id=cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7 namespace=k8s.io Jul 14 22:44:26.987654 containerd[1660]: time="2025-07-14T22:44:26.987538848Z" level=warning msg="cleaning up after shim disconnected" id=cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7 namespace=k8s.io Jul 14 22:44:26.987654 containerd[1660]: time="2025-07-14T22:44:26.987547394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:44:26.993561 containerd[1660]: time="2025-07-14T22:44:26.993398641Z" level=info msg="shim disconnected" id=58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408 namespace=k8s.io Jul 14 22:44:26.993561 containerd[1660]: time="2025-07-14T22:44:26.993412798Z" level=warning msg="cleaning up after shim disconnected" id=58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408 namespace=k8s.io Jul 14 22:44:26.993561 containerd[1660]: time="2025-07-14T22:44:26.993417559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:44:26.993678 containerd[1660]: time="2025-07-14T22:44:26.993665904Z" level=info msg="received exit event sandbox_id:\"cec018a341415c85344fdb27610dc35c947d1c2e7504fc85e0547f57419357f7\" exit_status:137 exited_at:{seconds:1752533066 nanos:944513845}" Jul 14 22:44:27.024946 containerd[1660]: time="2025-07-14T22:44:27.024874945Z" level=info msg="received exit event sandbox_id:\"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" exit_status:137 exited_at:{seconds:1752533066 nanos:966422190}" Jul 14 22:44:27.025311 containerd[1660]: time="2025-07-14T22:44:27.025264593Z" level=info msg="TearDown network for sandbox \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" successfully" Jul 14 22:44:27.025311 containerd[1660]: time="2025-07-14T22:44:27.025282040Z" level=info msg="StopPodSandbox for \"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" returns successfully" Jul 14 22:44:27.025508 containerd[1660]: time="2025-07-14T22:44:27.025472501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" id:\"58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408\" pid:3284 exit_status:137 exited_at:{seconds:1752533066 nanos:966422190}" Jul 14 22:44:27.187412 kubelet[3084]: I0714 22:44:27.186697 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-bpf-maps\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187412 kubelet[3084]: I0714 22:44:27.186750 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4766057-7f97-4c02-ae2f-688484934296-cilium-config-path\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187412 kubelet[3084]: I0714 22:44:27.186765 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-kernel\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187412 kubelet[3084]: I0714 22:44:27.186781 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b56f1c8-f624-41da-b793-0b6de0f9bee9-cilium-config-path\") pod \"2b56f1c8-f624-41da-b793-0b6de0f9bee9\" (UID: \"2b56f1c8-f624-41da-b793-0b6de0f9bee9\") " Jul 14 22:44:27.187412 kubelet[3084]: I0714 22:44:27.186798 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4766057-7f97-4c02-ae2f-688484934296-clustermesh-secrets\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187412 kubelet[3084]: I0714 22:44:27.186811 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-hubble-tls\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187849 kubelet[3084]: I0714 22:44:27.186824 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-net\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187849 kubelet[3084]: I0714 22:44:27.186835 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-xtables-lock\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187849 kubelet[3084]: I0714 22:44:27.186845 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cni-path\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187849 kubelet[3084]: I0714 22:44:27.186856 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-lib-modules\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187849 kubelet[3084]: I0714 22:44:27.186866 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-run\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.187849 kubelet[3084]: I0714 22:44:27.186877 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-etc-cni-netd\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.188029 kubelet[3084]: I0714 22:44:27.186890 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65mr2\" (UniqueName: \"kubernetes.io/projected/2b56f1c8-f624-41da-b793-0b6de0f9bee9-kube-api-access-65mr2\") pod \"2b56f1c8-f624-41da-b793-0b6de0f9bee9\" (UID: \"2b56f1c8-f624-41da-b793-0b6de0f9bee9\") " Jul 14 22:44:27.188029 kubelet[3084]: I0714 22:44:27.186901 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-cgroup\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.188029 kubelet[3084]: I0714 22:44:27.186913 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fng78\" (UniqueName: \"kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-kube-api-access-fng78\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.188029 kubelet[3084]: I0714 22:44:27.186923 3084 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-hostproc\") pod \"a4766057-7f97-4c02-ae2f-688484934296\" (UID: \"a4766057-7f97-4c02-ae2f-688484934296\") " Jul 14 22:44:27.188029 kubelet[3084]: I0714 22:44:27.186968 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-hostproc" (OuterVolumeSpecName: "hostproc") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.188029 kubelet[3084]: I0714 22:44:27.186702 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.188159 kubelet[3084]: I0714 22:44:27.187488 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cni-path" (OuterVolumeSpecName: "cni-path") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.188159 kubelet[3084]: I0714 22:44:27.187580 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.191723 kubelet[3084]: I0714 22:44:27.191697 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b56f1c8-f624-41da-b793-0b6de0f9bee9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b56f1c8-f624-41da-b793-0b6de0f9bee9" (UID: "2b56f1c8-f624-41da-b793-0b6de0f9bee9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:44:27.191987 kubelet[3084]: I0714 22:44:27.191791 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4766057-7f97-4c02-ae2f-688484934296-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:44:27.191987 kubelet[3084]: I0714 22:44:27.191819 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.191987 kubelet[3084]: I0714 22:44:27.191833 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.191987 kubelet[3084]: I0714 22:44:27.191843 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.199875 kubelet[3084]: I0714 22:44:27.199857 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4766057-7f97-4c02-ae2f-688484934296-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:44:27.199946 kubelet[3084]: I0714 22:44:27.199866 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b56f1c8-f624-41da-b793-0b6de0f9bee9-kube-api-access-65mr2" (OuterVolumeSpecName: "kube-api-access-65mr2") pod "2b56f1c8-f624-41da-b793-0b6de0f9bee9" (UID: "2b56f1c8-f624-41da-b793-0b6de0f9bee9"). 
InnerVolumeSpecName "kube-api-access-65mr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:27.200136 kubelet[3084]: I0714 22:44:27.199883 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.201604 kubelet[3084]: I0714 22:44:27.201583 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-kube-api-access-fng78" (OuterVolumeSpecName: "kube-api-access-fng78") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "kube-api-access-fng78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:27.201658 kubelet[3084]: I0714 22:44:27.201611 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.201658 kubelet[3084]: I0714 22:44:27.201627 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:27.201883 kubelet[3084]: I0714 22:44:27.201867 3084 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a4766057-7f97-4c02-ae2f-688484934296" (UID: "a4766057-7f97-4c02-ae2f-688484934296"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:27.287479 kubelet[3084]: I0714 22:44:27.287457 3084 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287479 kubelet[3084]: I0714 22:44:27.287477 3084 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287484 3084 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287492 3084 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287499 3084 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65mr2\" (UniqueName: \"kubernetes.io/projected/2b56f1c8-f624-41da-b793-0b6de0f9bee9-kube-api-access-65mr2\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287505 3084 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287511 3084 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fng78\" (UniqueName: \"kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-kube-api-access-fng78\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287517 3084 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287522 3084 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287564 kubelet[3084]: I0714 22:44:27.287528 3084 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4766057-7f97-4c02-ae2f-688484934296-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287747 kubelet[3084]: I0714 22:44:27.287534 3084 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287747 kubelet[3084]: I0714 22:44:27.287540 3084 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b56f1c8-f624-41da-b793-0b6de0f9bee9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287747 kubelet[3084]: I0714 22:44:27.287547 3084 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4766057-7f97-4c02-ae2f-688484934296-clustermesh-secrets\") on node 
\"localhost\" DevicePath \"\"" Jul 14 22:44:27.287747 kubelet[3084]: I0714 22:44:27.287553 3084 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4766057-7f97-4c02-ae2f-688484934296-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287747 kubelet[3084]: I0714 22:44:27.287558 3084 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.287747 kubelet[3084]: I0714 22:44:27.287565 3084 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4766057-7f97-4c02-ae2f-688484934296-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:27.466010 kubelet[3084]: I0714 22:44:27.465867 3084 scope.go:117] "RemoveContainer" containerID="aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36" Jul 14 22:44:27.470664 systemd[1]: Removed slice kubepods-burstable-poda4766057_7f97_4c02_ae2f_688484934296.slice - libcontainer container kubepods-burstable-poda4766057_7f97_4c02_ae2f_688484934296.slice. Jul 14 22:44:27.470723 systemd[1]: kubepods-burstable-poda4766057_7f97_4c02_ae2f_688484934296.slice: Consumed 4.801s CPU time, 195.6M memory peak, 71.9M read from disk, 13.3M written to disk. Jul 14 22:44:27.472533 containerd[1660]: time="2025-07-14T22:44:27.471604358Z" level=info msg="RemoveContainer for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\"" Jul 14 22:44:27.473939 systemd[1]: Removed slice kubepods-besteffort-pod2b56f1c8_f624_41da_b793_0b6de0f9bee9.slice - libcontainer container kubepods-besteffort-pod2b56f1c8_f624_41da_b793_0b6de0f9bee9.slice. Jul 14 22:44:27.474258 systemd[1]: kubepods-besteffort-pod2b56f1c8_f624_41da_b793_0b6de0f9bee9.slice: Consumed 340ms CPU time, 35.7M memory peak, 13.7M read from disk, 4K written to disk. 
Jul 14 22:44:27.478736 containerd[1660]: time="2025-07-14T22:44:27.478703741Z" level=info msg="RemoveContainer for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" returns successfully" Jul 14 22:44:27.478871 kubelet[3084]: I0714 22:44:27.478855 3084 scope.go:117] "RemoveContainer" containerID="c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320" Jul 14 22:44:27.479818 containerd[1660]: time="2025-07-14T22:44:27.479805053Z" level=info msg="RemoveContainer for \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\"" Jul 14 22:44:27.481543 containerd[1660]: time="2025-07-14T22:44:27.481532045Z" level=info msg="RemoveContainer for \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" returns successfully" Jul 14 22:44:27.481684 kubelet[3084]: I0714 22:44:27.481671 3084 scope.go:117] "RemoveContainer" containerID="2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104" Jul 14 22:44:27.484171 containerd[1660]: time="2025-07-14T22:44:27.484137704Z" level=info msg="RemoveContainer for \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\"" Jul 14 22:44:27.493001 containerd[1660]: time="2025-07-14T22:44:27.492947287Z" level=info msg="RemoveContainer for \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" returns successfully" Jul 14 22:44:27.493246 kubelet[3084]: I0714 22:44:27.493224 3084 scope.go:117] "RemoveContainer" containerID="9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5" Jul 14 22:44:27.495066 containerd[1660]: time="2025-07-14T22:44:27.495027204Z" level=info msg="RemoveContainer for \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\"" Jul 14 22:44:27.496658 containerd[1660]: time="2025-07-14T22:44:27.496616296Z" level=info msg="RemoveContainer for \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" returns successfully" Jul 14 22:44:27.496707 kubelet[3084]: I0714 22:44:27.496695 3084 scope.go:117] "RemoveContainer" containerID="5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6" Jul 14 22:44:27.497383 containerd[1660]: time="2025-07-14T22:44:27.497353290Z" level=info msg="RemoveContainer for \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\"" Jul 14 22:44:27.498710 containerd[1660]: time="2025-07-14T22:44:27.498658844Z" level=info msg="RemoveContainer for \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" returns successfully" Jul 14 22:44:27.500339 kubelet[3084]: I0714 22:44:27.500291 3084 scope.go:117] "RemoveContainer" containerID="aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36" Jul 14 22:44:27.501087 containerd[1660]: time="2025-07-14T22:44:27.501066452Z" level=error msg="ContainerStatus for \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\": not found" Jul 14 22:44:27.502134 kubelet[3084]: E0714 22:44:27.502117 3084 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\": not found" containerID="aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36" Jul 14 22:44:27.502998 kubelet[3084]: I0714 22:44:27.502940 3084 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36"} err="failed to get container status \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\": rpc error: code = NotFound desc = an error occurred when try to find container \"aab519c21433c065c41e900007d0c6abe4c9fd24bded4de1eaacc973bee5ad36\": not found" Jul 14 22:44:27.503057 kubelet[3084]: I0714 22:44:27.503050 3084 scope.go:117] "RemoveContainer" containerID="c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320" Jul 14 22:44:27.503212 containerd[1660]: time="2025-07-14T22:44:27.503186223Z" level=error msg="ContainerStatus for \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\": not found" Jul 14 22:44:27.505696 kubelet[3084]: E0714 22:44:27.505685 3084 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\": not found" containerID="c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320" Jul 14 22:44:27.505779 kubelet[3084]: I0714 22:44:27.505769 3084 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320"} err="failed to get container status \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8ab3b04ea95138ca0cf67bd9ef672ccfd5c4b30d8d8a57bfbc834196fe2c320\": not found" Jul 14 22:44:27.505847 kubelet[3084]: I0714 22:44:27.505840 3084 scope.go:117] "RemoveContainer" containerID="2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104" Jul 14 22:44:27.506014 containerd[1660]: time="2025-07-14T22:44:27.506000386Z" level=error msg="ContainerStatus for \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\": not found" Jul 14 22:44:27.506118 kubelet[3084]: E0714 22:44:27.506109 3084 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\": not found" containerID="2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104" Jul 14 22:44:27.506212 kubelet[3084]: I0714 22:44:27.506191 3084 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104"} err="failed to get container status \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\": rpc error: code = NotFound desc = an error occurred when try to find container \"2cbe79a151c22136464f1ab72baf95b73ae50e70a3952d2b61802171b8aad104\": not found" Jul 14 22:44:27.506298 kubelet[3084]: I0714 22:44:27.506290 3084 scope.go:117] "RemoveContainer" containerID="9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5" Jul 14 22:44:27.506449 containerd[1660]: time="2025-07-14T22:44:27.506435746Z" level=error msg="ContainerStatus for \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\": not found" Jul 14 22:44:27.506598 kubelet[3084]: E0714 22:44:27.506545 3084 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\": not found" containerID="9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5" Jul 14 22:44:27.506659 kubelet[3084]: I0714 22:44:27.506637 3084 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5"} err="failed to get container status \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9de040cf56165f6650010473d9d2571b266505c4b6bacb91cd921b7a271edee5\": not found" Jul 14 22:44:27.506696 kubelet[3084]: I0714 22:44:27.506691 3084 scope.go:117] "RemoveContainer" containerID="5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6" Jul 14 22:44:27.516315 containerd[1660]: time="2025-07-14T22:44:27.516298587Z" level=error msg="ContainerStatus for \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\": not found" Jul 14 22:44:27.516442 kubelet[3084]: E0714 22:44:27.516423 3084 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\": not found" containerID="5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6" Jul 14 22:44:27.516528 kubelet[3084]: I0714 22:44:27.516440 3084 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6"} err="failed to get container status \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\": rpc error: code = NotFound desc = an error occurred when try to find container \"5090f6b155c5b64a434f9170c9caf6a4e3d555757965438f9e8887202946cdf6\": not found" Jul 14 22:44:27.516600 kubelet[3084]: I0714 22:44:27.516586 3084 scope.go:117] "RemoveContainer" containerID="86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478" Jul 14 22:44:27.521450 containerd[1660]: time="2025-07-14T22:44:27.521028494Z" level=info msg="RemoveContainer for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\"" Jul 14 22:44:27.522790 containerd[1660]: time="2025-07-14T22:44:27.522778174Z" level=info msg="RemoveContainer for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" returns successfully" Jul 14 22:44:27.522916 kubelet[3084]: I0714 22:44:27.522897 3084 scope.go:117] "RemoveContainer" containerID="86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478" Jul 14 22:44:27.523027 containerd[1660]: time="2025-07-14T22:44:27.523012620Z" level=error msg="ContainerStatus for \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\": not found" Jul 14 22:44:27.523191 
kubelet[3084]: E0714 22:44:27.523164 3084 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\": not found" containerID="86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478" Jul 14 22:44:27.523223 kubelet[3084]: I0714 22:44:27.523194 3084 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478"} err="failed to get container status \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\": rpc error: code = NotFound desc = an error occurred when try to find container \"86ae1119cdf769fdf9dcc5cb64682facebddc41af3b0e40fc07ae8ae3cc86478\": not found" Jul 14 22:44:27.906717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58a4b7ef81d8d42e03066d07ad355e8fa94b35b9f8b0b3b9074bb9cc52919408-shm.mount: Deactivated successfully. Jul 14 22:44:27.906810 systemd[1]: var-lib-kubelet-pods-2b56f1c8\x2df624\x2d41da\x2db793\x2d0b6de0f9bee9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65mr2.mount: Deactivated successfully. Jul 14 22:44:27.906865 systemd[1]: var-lib-kubelet-pods-a4766057\x2d7f97\x2d4c02\x2dae2f\x2d688484934296-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfng78.mount: Deactivated successfully. Jul 14 22:44:27.906913 systemd[1]: var-lib-kubelet-pods-a4766057\x2d7f97\x2d4c02\x2dae2f\x2d688484934296-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:44:27.908591 systemd[1]: var-lib-kubelet-pods-a4766057\x2d7f97\x2d4c02\x2dae2f\x2d688484934296-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 22:44:28.727457 sshd[4841]: Connection closed by 139.178.89.65 port 56120 Jul 14 22:44:28.727387 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:28.735148 systemd[1]: sshd@35-139.178.70.108:22-139.178.89.65:56120.service: Deactivated successfully. Jul 14 22:44:28.736666 systemd[1]: session-37.scope: Deactivated successfully. Jul 14 22:44:28.737883 systemd-logind[1629]: Session 37 logged out. Waiting for processes to exit. Jul 14 22:44:28.739769 systemd[1]: Started sshd@36-139.178.70.108:22-139.178.89.65:56126.service - OpenSSH per-connection server daemon (139.178.89.65:56126). Jul 14 22:44:28.740139 systemd-logind[1629]: Removed session 37. Jul 14 22:44:28.791931 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 56126 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:28.792813 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:28.797391 systemd-logind[1629]: New session 38 of user core. Jul 14 22:44:28.805346 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jul 14 22:44:29.080654 kubelet[3084]: I0714 22:44:29.080636 3084 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b56f1c8-f624-41da-b793-0b6de0f9bee9" path="/var/lib/kubelet/pods/2b56f1c8-f624-41da-b793-0b6de0f9bee9/volumes" Jul 14 22:44:29.080876 kubelet[3084]: I0714 22:44:29.080862 3084 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4766057-7f97-4c02-ae2f-688484934296" path="/var/lib/kubelet/pods/a4766057-7f97-4c02-ae2f-688484934296/volumes" Jul 14 22:44:29.125219 sshd[4996]: Connection closed by 139.178.89.65 port 56126 Jul 14 22:44:29.126914 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:29.132658 systemd[1]: sshd@36-139.178.70.108:22-139.178.89.65:56126.service: Deactivated successfully. Jul 14 22:44:29.134688 systemd[1]: session-38.scope: Deactivated successfully. Jul 14 22:44:29.136278 systemd-logind[1629]: Session 38 logged out. Waiting for processes to exit. Jul 14 22:44:29.140523 systemd[1]: Started sshd@37-139.178.70.108:22-139.178.89.65:55580.service - OpenSSH per-connection server daemon (139.178.89.65:55580). Jul 14 22:44:29.141416 systemd-logind[1629]: Removed session 38. Jul 14 22:44:29.150166 kubelet[3084]: E0714 22:44:29.150118 3084 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b56f1c8-f624-41da-b793-0b6de0f9bee9" containerName="cilium-operator" Jul 14 22:44:29.150533 kubelet[3084]: E0714 22:44:29.150517 3084 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4766057-7f97-4c02-ae2f-688484934296" containerName="apply-sysctl-overwrites" Jul 14 22:44:29.150533 kubelet[3084]: E0714 22:44:29.150529 3084 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4766057-7f97-4c02-ae2f-688484934296" containerName="mount-bpf-fs" Jul 14 22:44:29.150589 kubelet[3084]: E0714 22:44:29.150536 3084 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4766057-7f97-4c02-ae2f-688484934296" containerName="mount-cgroup" Jul 14 22:44:29.150589 kubelet[3084]: E0714 22:44:29.150539 3084 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4766057-7f97-4c02-ae2f-688484934296" containerName="clean-cilium-state" Jul 14 22:44:29.150589 kubelet[3084]: E0714 22:44:29.150543 3084 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4766057-7f97-4c02-ae2f-688484934296" containerName="cilium-agent" Jul 14 22:44:29.150982 kubelet[3084]: I0714 22:44:29.150672 3084 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b56f1c8-f624-41da-b793-0b6de0f9bee9" containerName="cilium-operator" Jul 14 22:44:29.150982 kubelet[3084]: I0714 22:44:29.150683 3084 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4766057-7f97-4c02-ae2f-688484934296" containerName="cilium-agent" Jul 14 22:44:29.167740 systemd[1]: Created slice kubepods-burstable-pod7025efd6_fc43_4803_aaa6_35e14e0088b3.slice - libcontainer container kubepods-burstable-pod7025efd6_fc43_4803_aaa6_35e14e0088b3.slice. 
Jul 14 22:44:29.182796 kubelet[3084]: E0714 22:44:29.181572 3084 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 22:44:29.197057 kubelet[3084]: I0714 22:44:29.197017 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-cni-path\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197057 kubelet[3084]: I0714 22:44:29.197054 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-lib-modules\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197178 kubelet[3084]: I0714 22:44:29.197067 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-xtables-lock\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197178 kubelet[3084]: I0714 22:44:29.197077 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7025efd6-fc43-4803-aaa6-35e14e0088b3-cilium-ipsec-secrets\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197178 kubelet[3084]: I0714 22:44:29.197089 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7025efd6-fc43-4803-aaa6-35e14e0088b3-clustermesh-secrets\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197178 kubelet[3084]: I0714 22:44:29.197100 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-bpf-maps\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197178 kubelet[3084]: I0714 22:44:29.197110 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-cilium-cgroup\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197178 kubelet[3084]: I0714 22:44:29.197121 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xcf\" (UniqueName: \"kubernetes.io/projected/7025efd6-fc43-4803-aaa6-35e14e0088b3-kube-api-access-l2xcf\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197329 kubelet[3084]: I0714 22:44:29.197134 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-cilium-run\") pod \"cilium-kbhs6\" (UID: 
\"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197329 kubelet[3084]: I0714 22:44:29.197155 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-etc-cni-netd\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197329 kubelet[3084]: I0714 22:44:29.197167 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7025efd6-fc43-4803-aaa6-35e14e0088b3-cilium-config-path\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197329 kubelet[3084]: I0714 22:44:29.197178 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-host-proc-sys-kernel\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197329 kubelet[3084]: I0714 22:44:29.197192 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-host-proc-sys-net\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197329 kubelet[3084]: I0714 22:44:29.197202 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7025efd6-fc43-4803-aaa6-35e14e0088b3-hostproc\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.197455 kubelet[3084]: I0714 22:44:29.197214 3084 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7025efd6-fc43-4803-aaa6-35e14e0088b3-hubble-tls\") pod \"cilium-kbhs6\" (UID: \"7025efd6-fc43-4803-aaa6-35e14e0088b3\") " pod="kube-system/cilium-kbhs6" Jul 14 22:44:29.201395 sshd[5007]: Accepted publickey for core from 139.178.89.65 port 55580 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:29.202737 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:29.206737 systemd-logind[1629]: New session 39 of user core. Jul 14 22:44:29.210324 systemd[1]: Started session-39.scope - Session 39 of User core. Jul 14 22:44:29.258836 sshd[5010]: Connection closed by 139.178.89.65 port 55580 Jul 14 22:44:29.259788 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:29.268971 systemd[1]: sshd@37-139.178.70.108:22-139.178.89.65:55580.service: Deactivated successfully. Jul 14 22:44:29.270690 systemd[1]: session-39.scope: Deactivated successfully. Jul 14 22:44:29.271573 systemd-logind[1629]: Session 39 logged out. Waiting for processes to exit. Jul 14 22:44:29.274741 systemd[1]: Started sshd@38-139.178.70.108:22-139.178.89.65:55588.service - OpenSSH per-connection server daemon (139.178.89.65:55588). Jul 14 22:44:29.275638 systemd-logind[1629]: Removed session 39. 
Jul 14 22:44:29.325269 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 55588 ssh2: RSA SHA256:RSWGZuhuTovkP7yToXQSr6sgrWxhGTyTYOnlX2cWN2k Jul 14 22:44:29.326706 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:44:29.329502 systemd-logind[1629]: New session 40 of user core. Jul 14 22:44:29.337369 systemd[1]: Started session-40.scope - Session 40 of User core. Jul 14 22:44:29.477289 containerd[1660]: time="2025-07-14T22:44:29.477255210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kbhs6,Uid:7025efd6-fc43-4803-aaa6-35e14e0088b3,Namespace:kube-system,Attempt:0,}" Jul 14 22:44:29.491611 containerd[1660]: time="2025-07-14T22:44:29.491359253Z" level=info msg="connecting to shim 8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53" address="unix:///run/containerd/s/3f356f6c42a253266df70f7fe866575547a053a2731fbc298762f3e2e8aae828" namespace=k8s.io protocol=ttrpc version=3 Jul 14 22:44:29.510401 systemd[1]: Started cri-containerd-8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53.scope - libcontainer container 8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53. Jul 14 22:44:29.530980 containerd[1660]: time="2025-07-14T22:44:29.530957586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kbhs6,Uid:7025efd6-fc43-4803-aaa6-35e14e0088b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\"" Jul 14 22:44:29.540514 containerd[1660]: time="2025-07-14T22:44:29.540485425Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:44:29.549331 containerd[1660]: time="2025-07-14T22:44:29.549300684Z" level=info msg="Container 95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:44:29.553763 containerd[1660]: time="2025-07-14T22:44:29.553738969Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\"" Jul 14 22:44:29.554385 containerd[1660]: time="2025-07-14T22:44:29.554364935Z" level=info msg="StartContainer for \"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\"" Jul 14 22:44:29.555623 containerd[1660]: time="2025-07-14T22:44:29.555607360Z" level=info msg="connecting to shim 95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da" address="unix:///run/containerd/s/3f356f6c42a253266df70f7fe866575547a053a2731fbc298762f3e2e8aae828" protocol=ttrpc version=3 Jul 14 22:44:29.571337 systemd[1]: Started cri-containerd-95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da.scope - libcontainer container 95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da. Jul 14 22:44:29.588631 containerd[1660]: time="2025-07-14T22:44:29.588564564Z" level=info msg="StartContainer for \"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\" returns successfully" Jul 14 22:44:29.612890 systemd[1]: cri-containerd-95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da.scope: Deactivated successfully. 
Jul 14 22:44:29.613253 systemd[1]: cri-containerd-95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da.scope: Consumed 13ms CPU time, 9.6M memory peak, 3.2M read from disk. Jul 14 22:44:29.614374 containerd[1660]: time="2025-07-14T22:44:29.614351584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\" id:\"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\" pid:5086 exited_at:{seconds:1752533069 nanos:614068638}" Jul 14 22:44:29.614416 containerd[1660]: time="2025-07-14T22:44:29.614384028Z" level=info msg="received exit event container_id:\"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\" id:\"95a263359359b6538f12eaa44c26a0afd64a7e58ce2c49b9748b48b137ef45da\" pid:5086 exited_at:{seconds:1752533069 nanos:614068638}" Jul 14 22:44:30.482134 containerd[1660]: time="2025-07-14T22:44:30.482096211Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:44:30.492859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065867578.mount: Deactivated successfully. Jul 14 22:44:30.493867 containerd[1660]: time="2025-07-14T22:44:30.493747814Z" level=info msg="Container 072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:44:30.496941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582881084.mount: Deactivated successfully. Jul 14 22:44:30.508978 containerd[1660]: time="2025-07-14T22:44:30.508954594Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\"" Jul 14 22:44:30.510732 containerd[1660]: time="2025-07-14T22:44:30.510693298Z" level=info msg="StartContainer for \"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\"" Jul 14 22:44:30.511600 containerd[1660]: time="2025-07-14T22:44:30.511555482Z" level=info msg="connecting to shim 072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0" address="unix:///run/containerd/s/3f356f6c42a253266df70f7fe866575547a053a2731fbc298762f3e2e8aae828" protocol=ttrpc version=3 Jul 14 22:44:30.530481 systemd[1]: Started cri-containerd-072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0.scope - libcontainer container 072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0. Jul 14 22:44:30.546759 containerd[1660]: time="2025-07-14T22:44:30.546719868Z" level=info msg="StartContainer for \"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\" returns successfully" Jul 14 22:44:30.558733 systemd[1]: cri-containerd-072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0.scope: Deactivated successfully. Jul 14 22:44:30.559103 systemd[1]: cri-containerd-072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0.scope: Consumed 11ms CPU time, 7.3M memory peak, 2.2M read from disk. 
Jul 14 22:44:30.559534 containerd[1660]: time="2025-07-14T22:44:30.559434370Z" level=info msg="received exit event container_id:\"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\" id:\"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\" pid:5132 exited_at:{seconds:1752533070 nanos:558995963}" Jul 14 22:44:30.559645 containerd[1660]: time="2025-07-14T22:44:30.559628145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\" id:\"072cb83668be0b4ebd53cb95781566eaaf7dd0113811caad790f6b435a5889d0\" pid:5132 exited_at:{seconds:1752533070 nanos:558995963}" Jul 14 22:44:31.482887 containerd[1660]: time="2025-07-14T22:44:31.482610102Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:44:31.493308 containerd[1660]: time="2025-07-14T22:44:31.493277220Z" level=info msg="Container 069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:44:31.497101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955652705.mount: Deactivated successfully. Jul 14 22:44:31.504564 containerd[1660]: time="2025-07-14T22:44:31.504525053Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\"" Jul 14 22:44:31.504933 containerd[1660]: time="2025-07-14T22:44:31.504900497Z" level=info msg="StartContainer for \"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\"" Jul 14 22:44:31.505811 containerd[1660]: time="2025-07-14T22:44:31.505796610Z" level=info msg="connecting to shim 069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753" address="unix:///run/containerd/s/3f356f6c42a253266df70f7fe866575547a053a2731fbc298762f3e2e8aae828" protocol=ttrpc version=3 Jul 14 22:44:31.521458 systemd[1]: Started cri-containerd-069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753.scope - libcontainer container 069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753. Jul 14 22:44:31.543429 containerd[1660]: time="2025-07-14T22:44:31.543390180Z" level=info msg="StartContainer for \"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\" returns successfully" Jul 14 22:44:31.549331 systemd[1]: cri-containerd-069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753.scope: Deactivated successfully. Jul 14 22:44:31.550176 containerd[1660]: time="2025-07-14T22:44:31.550157490Z" level=info msg="received exit event container_id:\"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\" id:\"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\" pid:5176 exited_at:{seconds:1752533071 nanos:550036370}" Jul 14 22:44:31.550339 containerd[1660]: time="2025-07-14T22:44:31.550325367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\" id:\"069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753\" pid:5176 exited_at:{seconds:1752533071 nanos:550036370}" Jul 14 22:44:32.304551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-069cd8bfe136ee38e40195c9c331be4872eec040e8ea9d9a8f91ab7194345753-rootfs.mount: Deactivated successfully. 
Jul 14 22:44:32.486100 containerd[1660]: time="2025-07-14T22:44:32.485680388Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:44:32.494278 containerd[1660]: time="2025-07-14T22:44:32.494142062Z" level=info msg="Container e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:44:32.497924 containerd[1660]: time="2025-07-14T22:44:32.497902515Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\"" Jul 14 22:44:32.499186 containerd[1660]: time="2025-07-14T22:44:32.498826471Z" level=info msg="StartContainer for \"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\"" Jul 14 22:44:32.502157 containerd[1660]: time="2025-07-14T22:44:32.502017473Z" level=info msg="connecting to shim e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e" address="unix:///run/containerd/s/3f356f6c42a253266df70f7fe866575547a053a2731fbc298762f3e2e8aae828" protocol=ttrpc version=3 Jul 14 22:44:32.517273 systemd[1]: Started cri-containerd-e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e.scope - libcontainer container e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e. Jul 14 22:44:32.533951 systemd[1]: cri-containerd-e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e.scope: Deactivated successfully. Jul 14 22:44:32.534976 containerd[1660]: time="2025-07-14T22:44:32.534863093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\" id:\"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\" pid:5215 exited_at:{seconds:1752533072 nanos:534719483}" Jul 14 22:44:32.534976 containerd[1660]: time="2025-07-14T22:44:32.534904251Z" level=info msg="received exit event container_id:\"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\" id:\"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\" pid:5215 exited_at:{seconds:1752533072 nanos:534719483}" Jul 14 22:44:32.539539 containerd[1660]: time="2025-07-14T22:44:32.539521334Z" level=info msg="StartContainer for \"e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e\" returns successfully" Jul 14 22:44:32.548171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e368c2ab92371abb122856c6693e1af14c7f1dfbb1b86bb6073e3e5591629b9e-rootfs.mount: Deactivated successfully. Jul 14 22:44:33.388295 update_engine[1630]: I20250714 22:44:33.388136 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 14 22:44:33.388659 update_engine[1630]: I20250714 22:44:33.388310 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 14 22:44:33.388659 update_engine[1630]: I20250714 22:44:33.388500 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 14 22:44:33.392229 update_engine[1630]: E20250714 22:44:33.392211 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 14 22:44:33.392278 update_engine[1630]: I20250714 22:44:33.392263 1630 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 14 22:44:33.489393 containerd[1660]: time="2025-07-14T22:44:33.489329480Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:44:33.506698 containerd[1660]: time="2025-07-14T22:44:33.505634880Z" level=info msg="Container b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344: CDI devices from CRI Config.CDIDevices: []" Jul 14 22:44:33.507713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437287713.mount: Deactivated successfully. Jul 14 22:44:33.509495 containerd[1660]: time="2025-07-14T22:44:33.509464301Z" level=info msg="CreateContainer within sandbox \"8eaf703c20be3fa9f47af36b7ddcbf28e20041398d44b43e1e26004754844f53\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\"" Jul 14 22:44:33.510299 containerd[1660]: time="2025-07-14T22:44:33.510273008Z" level=info msg="StartContainer for \"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\"" Jul 14 22:44:33.511279 containerd[1660]: time="2025-07-14T22:44:33.511263503Z" level=info msg="connecting to shim b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344" address="unix:///run/containerd/s/3f356f6c42a253266df70f7fe866575547a053a2731fbc298762f3e2e8aae828" protocol=ttrpc version=3 Jul 14 22:44:33.529417 systemd[1]: Started cri-containerd-b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344.scope - libcontainer container b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344. 
Jul 14 22:44:33.551366 containerd[1660]: time="2025-07-14T22:44:33.551304679Z" level=info msg="StartContainer for \"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\" returns successfully" Jul 14 22:44:33.634037 containerd[1660]: time="2025-07-14T22:44:33.633987341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\" id:\"341b2140dc99780631629e8babb35ebd4458c0c4efed171c22155b35be9045cc\" pid:5284 exited_at:{seconds:1752533073 nanos:633744826}" Jul 14 22:44:33.991560 kubelet[3084]: I0714 22:44:33.991526 3084 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T22:44:33Z","lastTransitionTime":"2025-07-14T22:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 14 22:44:34.081365 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 14 22:44:34.502956 kubelet[3084]: I0714 22:44:34.502854 3084 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kbhs6" podStartSLOduration=5.502049633 podStartE2EDuration="5.502049633s" podCreationTimestamp="2025-07-14 22:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:44:34.50143039 +0000 UTC m=+205.605425494" watchObservedRunningTime="2025-07-14 22:44:34.502049633 +0000 UTC m=+205.606044739" Jul 14 22:44:35.726524 containerd[1660]: time="2025-07-14T22:44:35.726489339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\" id:\"17c3fa0452d7e0d375c16113c27ca01800258216599412eef8cfb6f5e5a05675\" pid:5447 exit_status:1 exited_at:{seconds:1752533075 nanos:726078199}" Jul 14 22:44:36.531086 systemd-networkd[1517]: lxc_health: Link UP Jul 14 22:44:36.531265 systemd-networkd[1517]: lxc_health: Gained carrier Jul 14 22:44:37.843116 containerd[1660]: time="2025-07-14T22:44:37.843008204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\" id:\"07ad62940822edc5510b16cae4d60b6743a68d7f8e2aca716dc0ed9b786d8a5d\" pid:5826 exited_at:{seconds:1752533077 nanos:842780491}" Jul 14 22:44:38.272391 systemd-networkd[1517]: lxc_health: Gained IPv6LL Jul 14 22:44:39.906726 containerd[1660]: time="2025-07-14T22:44:39.906698983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\" id:\"273c5904063f9fcb8cfdf81898a50d9c85e4ab16c90b5c8477d465c28cb3c1e5\" pid:5858 exited_at:{seconds:1752533079 nanos:906129465}" Jul 14 22:44:41.974357 containerd[1660]: time="2025-07-14T22:44:41.974311849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40d26f09cc3fe8c06848f471fdfa92a82cea9f70e2366e40b3e768acaf8c344\" id:\"0bc3175834f0d8f8243a1f77d54979395b8f438672c4ab247154b1476dd0acf7\" pid:5881 exited_at:{seconds:1752533081 nanos:974100079}" Jul 14 22:44:41.975961 kubelet[3084]: E0714 22:44:41.975909 3084 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45106->127.0.0.1:41805: write tcp 127.0.0.1:45106->127.0.0.1:41805: write: broken pipe Jul 14 22:44:41.983037 sshd[5024]: Connection closed by 139.178.89.65 
port 55588 Jul 14 22:44:41.983540 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:41.986306 systemd[1]: sshd@38-139.178.70.108:22-139.178.89.65:55588.service: Deactivated successfully. Jul 14 22:44:41.987413 systemd[1]: session-40.scope: Deactivated successfully. Jul 14 22:44:41.988288 systemd-logind[1629]: Session 40 logged out. Waiting for processes to exit. Jul 14 22:44:41.989155 systemd-logind[1629]: Removed session 40. Jul 14 22:44:43.388771 update_engine[1630]: I20250714 22:44:43.388697 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 14 22:44:43.388996 update_engine[1630]: I20250714 22:44:43.388856 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 14 22:44:43.389024 update_engine[1630]: I20250714 22:44:43.389006 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 14 22:44:43.394278 update_engine[1630]: E20250714 22:44:43.394258 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 14 22:44:43.394314 update_engine[1630]: I20250714 22:44:43.394299 1630 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 14 22:44:43.394314 update_engine[1630]: I20250714 22:44:43.394305 1630 omaha_request_action.cc:617] Omaha request response: Jul 14 22:44:43.394365 update_engine[1630]: E20250714 22:44:43.394349 1630 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 14 22:44:43.396005 update_engine[1630]: I20250714 22:44:43.395985 1630 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 14 22:44:43.396031 update_engine[1630]: I20250714 22:44:43.396003 1630 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 14 22:44:43.396031 update_engine[1630]: I20250714 22:44:43.396008 1630 update_attempter.cc:306] Processing Done. Jul 14 22:44:43.396031 update_engine[1630]: E20250714 22:44:43.396016 1630 update_attempter.cc:619] Update failed. Jul 14 22:44:43.396031 update_engine[1630]: I20250714 22:44:43.396021 1630 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 14 22:44:43.396031 update_engine[1630]: I20250714 22:44:43.396024 1630 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 14 22:44:43.396031 update_engine[1630]: I20250714 22:44:43.396027 1630 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 14 22:44:43.396273 update_engine[1630]: I20250714 22:44:43.396167 1630 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 14 22:44:43.396273 update_engine[1630]: I20250714 22:44:43.396193 1630 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 14 22:44:43.396273 update_engine[1630]: I20250714 22:44:43.396198 1630 omaha_request_action.cc:272] Request: Jul 14 22:44:43.396273 update_engine[1630]: Jul 14 22:44:43.396273 update_engine[1630]: Jul 14 22:44:43.396273 update_engine[1630]: Jul 14 22:44:43.396273 update_engine[1630]: Jul 14 22:44:43.396273 update_engine[1630]: Jul 14 22:44:43.396273 update_engine[1630]: Jul 14 22:44:43.396273 update_engine[1630]: I20250714 22:44:43.396200 1630 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 14 22:44:43.396420 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 14 22:44:43.396537 update_engine[1630]: I20250714 22:44:43.396291 1630 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 14 22:44:43.396537 update_engine[1630]: I20250714 22:44:43.396417 1630 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 14 22:44:43.399132 update_engine[1630]: E20250714 22:44:43.399116 1630 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 14 22:44:43.399161 update_engine[1630]: I20250714 22:44:43.399148 1630 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 14 22:44:43.399161 update_engine[1630]: I20250714 22:44:43.399154 1630 omaha_request_action.cc:617] Omaha request response: Jul 14 22:44:43.399161 update_engine[1630]: I20250714 22:44:43.399158 1630 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 14 22:44:43.399205 update_engine[1630]: I20250714 22:44:43.399161 1630 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 14 22:44:43.399205 update_engine[1630]: I20250714 22:44:43.399163 1630 update_attempter.cc:306] Processing Done. Jul 14 22:44:43.399205 update_engine[1630]: I20250714 22:44:43.399166 1630 update_attempter.cc:310] Error event sent. Jul 14 22:44:43.399205 update_engine[1630]: I20250714 22:44:43.399170 1630 update_check_scheduler.cc:74] Next update check in 45m9s Jul 14 22:44:43.399372 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0