Sep 9 00:19:20.704921 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025 Sep 9 00:19:20.704937 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:19:20.704944 kernel: Disabled fast string operations Sep 9 00:19:20.704948 kernel: BIOS-provided physical RAM map: Sep 9 00:19:20.704952 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Sep 9 00:19:20.704957 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Sep 9 00:19:20.704962 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Sep 9 00:19:20.704966 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Sep 9 00:19:20.704971 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Sep 9 00:19:20.704975 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Sep 9 00:19:20.704979 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Sep 9 00:19:20.704984 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Sep 9 00:19:20.704988 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Sep 9 00:19:20.704992 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 9 00:19:20.704999 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Sep 9 00:19:20.705004 kernel: NX (Execute Disable) protection: active Sep 9 00:19:20.705008 kernel: APIC: Static calls initialized Sep 9 00:19:20.705013 kernel: SMBIOS 2.7 present. Sep 9 00:19:20.705018 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Sep 9 00:19:20.705023 kernel: DMI: Memory slots populated: 1/128 Sep 9 00:19:20.705029 kernel: vmware: hypercall mode: 0x00 Sep 9 00:19:20.705034 kernel: Hypervisor detected: VMware Sep 9 00:19:20.705038 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Sep 9 00:19:20.705043 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Sep 9 00:19:20.705048 kernel: vmware: using clock offset of 5042722369 ns Sep 9 00:19:20.705053 kernel: tsc: Detected 3408.000 MHz processor Sep 9 00:19:20.705058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:19:20.705063 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:19:20.705068 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Sep 9 00:19:20.705073 kernel: total RAM covered: 3072M Sep 9 00:19:20.705079 kernel: Found optimal setting for mtrr clean up Sep 9 00:19:20.705085 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Sep 9 00:19:20.705090 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Sep 9 00:19:20.705095 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:19:20.705099 kernel: Using GB pages for direct mapping Sep 9 00:19:20.705104 kernel: ACPI: Early table checksum verification disabled Sep 9 00:19:20.705109 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Sep 9 00:19:20.705114 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Sep 9 00:19:20.705119 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Sep 9 00:19:20.705125 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Sep 9 00:19:20.705132 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 00:19:20.705137 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 00:19:20.705142 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 9 00:19:20.705147 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Sep 9 00:19:20.705154 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 9 00:19:20.705159 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 9 00:19:20.705164 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 9 00:19:20.705169 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 9 00:19:20.705174 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 9 00:19:20.705179 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 9 00:19:20.705184 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 00:19:20.705189 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 00:19:20.705194 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 9 00:19:20.705201 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 9 00:19:20.705206 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 9 00:19:20.705211 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 9 00:19:20.705216 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 9 00:19:20.705221 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 9 00:19:20.705226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 9 00:19:20.705231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 9 00:19:20.705236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 9 00:19:20.705241 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Sep 9 00:19:20.705247 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Sep 9 00:19:20.705253 kernel: Zone ranges: Sep 9 00:19:20.705258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:19:20.705263 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 9 00:19:20.705268 kernel: Normal empty Sep 9 00:19:20.705273 kernel: Device empty Sep 9 00:19:20.705278 kernel: Movable zone start for each node Sep 9 00:19:20.705283 kernel: Early memory node ranges Sep 9 00:19:20.705288 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 9 00:19:20.705293 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 9 00:19:20.705299 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 9 00:19:20.705304 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 9 00:19:20.705310 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:19:20.705315 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 9 00:19:20.705320 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 9 00:19:20.705325 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 9 00:19:20.705330 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Sep 9 00:19:20.705335 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 9 00:19:20.705340 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 9 00:19:20.705346 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 9 00:19:20.705351 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 9 00:19:20.705356 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 9 00:19:20.705361 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 9 00:19:20.705366 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 9 00:19:20.705371 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 9 00:19:20.705376 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 9 00:19:20.705381 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 9 00:19:20.705386 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 9 00:19:20.705391 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 9 00:19:20.705397 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 9 00:19:20.705402 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 9 00:19:20.705407 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 9 00:19:20.705412 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 9 00:19:20.705417 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 9 00:19:20.705422 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 9 00:19:20.705427 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 9 00:19:20.705432 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 9 00:19:20.705437 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 9 00:19:20.705442 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 9 00:19:20.705448 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 9 00:19:20.705453 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 9 00:19:20.705458 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 9 00:19:20.705463 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 9 00:19:20.705468 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 9 00:19:20.705473 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 9 00:19:20.705478 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 9 00:19:20.705483 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 9 00:19:20.705488 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 9 00:19:20.705493 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 9 00:19:20.705499 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 9 00:19:20.705504 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 9 00:19:20.705509 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 9 00:19:20.705514 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 9 00:19:20.705519 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 9 00:19:20.705524 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 9 00:19:20.705530 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 9 00:19:20.705539 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 9 00:19:20.705544 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 9 00:19:20.705550 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 9 00:19:20.705555 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 9 00:19:20.705561 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 9 00:19:20.705567 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 9 00:19:20.705572 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 9 00:19:20.705577 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 9 00:19:20.705589 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 9 00:19:20.705595 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 9 00:19:20.705600 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] 
high edge lint[0x1]) Sep 9 00:19:20.705607 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 9 00:19:20.705613 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 9 00:19:20.705618 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 9 00:19:20.705623 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 9 00:19:20.705628 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 9 00:19:20.705634 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 9 00:19:20.705639 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 9 00:19:20.705644 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 9 00:19:20.705650 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 9 00:19:20.705655 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 9 00:19:20.705661 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 9 00:19:20.705667 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 9 00:19:20.705689 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 9 00:19:20.705695 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 9 00:19:20.705701 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 9 00:19:20.705706 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 9 00:19:20.705712 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 9 00:19:20.705732 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 9 00:19:20.705738 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 9 00:19:20.705744 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 9 00:19:20.705749 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 9 00:19:20.705755 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 9 00:19:20.705760 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 9 00:19:20.705765 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 9 00:19:20.705771 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 9 00:19:20.705776 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 9 00:19:20.705782 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 9 00:19:20.705787 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 9 00:19:20.705792 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 9 00:19:20.705799 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 9 00:19:20.705804 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 9 00:19:20.705809 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 9 00:19:20.705815 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 9 00:19:20.705820 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 9 00:19:20.705825 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 9 00:19:20.705830 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 9 00:19:20.705836 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 9 00:19:20.705841 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 9 00:19:20.705846 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 9 00:19:20.705853 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 9 00:19:20.705858 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 9 00:19:20.705863 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 9 00:19:20.705869 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 9 
00:19:20.705874 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 9 00:19:20.705880 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 9 00:19:20.705885 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 9 00:19:20.705890 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 9 00:19:20.705896 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 9 00:19:20.705902 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 9 00:19:20.705907 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 9 00:19:20.705913 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 9 00:19:20.705918 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 9 00:19:20.705923 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 9 00:19:20.705929 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 9 00:19:20.705934 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 9 00:19:20.705939 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 9 00:19:20.705944 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 9 00:19:20.705950 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 9 00:19:20.705956 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 9 00:19:20.705961 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 9 00:19:20.705967 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 9 00:19:20.705972 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 9 00:19:20.705978 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 9 00:19:20.705983 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 9 00:19:20.705988 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 9 00:19:20.705994 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 9 00:19:20.705999 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 9 00:19:20.706004 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 9 00:19:20.706010 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 9 00:19:20.706016 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 9 00:19:20.706021 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 9 00:19:20.706027 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Sep 9 00:19:20.706032 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 9 00:19:20.706037 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 9 00:19:20.706043 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 9 00:19:20.706048 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 9 00:19:20.706054 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 9 00:19:20.706059 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:19:20.706066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 9 00:19:20.706071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:19:20.706077 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 9 00:19:20.706082 kernel: TSC deadline timer available Sep 9 00:19:20.706088 kernel: CPU topo: Max. logical packages: 128 Sep 9 00:19:20.706093 kernel: CPU topo: Max. logical dies: 128 Sep 9 00:19:20.706098 kernel: CPU topo: Max. dies per package: 1 Sep 9 00:19:20.706104 kernel: CPU topo: Max. threads per core: 1 Sep 9 00:19:20.706109 kernel: CPU topo: Num. cores per package: 1 Sep 9 00:19:20.706116 kernel: CPU topo: Num. 
threads per package: 1 Sep 9 00:19:20.706121 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Sep 9 00:19:20.706126 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 9 00:19:20.706132 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 9 00:19:20.706138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:19:20.706143 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 9 00:19:20.706149 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Sep 9 00:19:20.706154 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Sep 9 00:19:20.706160 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 9 00:19:20.706166 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 9 00:19:20.706171 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 9 00:19:20.706177 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 9 00:19:20.706182 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 9 00:19:20.706187 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 9 00:19:20.706193 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 9 00:19:20.706198 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 9 00:19:20.706203 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 9 00:19:20.706208 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 9 00:19:20.706215 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 9 00:19:20.706220 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 9 00:19:20.706226 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 9 00:19:20.706231 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 9 00:19:20.706237 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 9 00:19:20.706242 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 9 00:19:20.706248 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:19:20.706254 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:19:20.706260 kernel: random: crng init done Sep 9 00:19:20.706266 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 9 00:19:20.706271 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 9 00:19:20.706277 kernel: printk: log_buf_len min size: 262144 bytes Sep 9 00:19:20.706282 kernel: printk: log_buf_len: 1048576 bytes Sep 9 00:19:20.706287 kernel: printk: early log buf free: 245576(93%) Sep 9 00:19:20.706293 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:19:20.706298 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 00:19:20.706304 kernel: Fallback order for Node 0: 0 Sep 9 00:19:20.706310 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Sep 9 00:19:20.706316 kernel: Policy zone: DMA32 Sep 9 00:19:20.706321 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:19:20.706327 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 9 00:19:20.706332 kernel: ftrace: allocating 40099 entries in 157 pages Sep 9 00:19:20.706338 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 00:19:20.706344 kernel: Dynamic Preempt: voluntary Sep 9 00:19:20.706349 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:19:20.706355 kernel: rcu: RCU event tracing is enabled. Sep 9 00:19:20.706361 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 9 00:19:20.706367 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:19:20.706373 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:19:20.706378 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:19:20.706384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:19:20.706389 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 9 00:19:20.706395 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:19:20.706400 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:19:20.706406 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:19:20.706412 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 9 00:19:20.706418 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Sep 9 00:19:20.706423 kernel: Console: colour VGA+ 80x25 Sep 9 00:19:20.706428 kernel: printk: legacy console [tty0] enabled Sep 9 00:19:20.706434 kernel: printk: legacy console [ttyS0] enabled Sep 9 00:19:20.706439 kernel: ACPI: Core revision 20240827 Sep 9 00:19:20.706445 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 9 00:19:20.706451 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:19:20.706456 kernel: x2apic enabled Sep 9 00:19:20.706463 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:19:20.706468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:19:20.706474 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 00:19:20.706480 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Sep 9 00:19:20.706485 kernel: Disabled fast string operations Sep 9 00:19:20.706490 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 9 00:19:20.706496 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 9 00:19:20.706501 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:19:20.706507 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 9 00:19:20.706513 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 9 00:19:20.706519 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 9 00:19:20.706524 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 9 00:19:20.706530 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:19:20.706535 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:19:20.706541 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 9 00:19:20.706546 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 9 00:19:20.706552 kernel: GDS: Unknown: Dependent on hypervisor status Sep 9 00:19:20.706557 kernel: active return thunk: its_return_thunk Sep 9 00:19:20.706563 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 00:19:20.706569 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:19:20.706574 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:19:20.706580 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:19:20.706593 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:19:20.706599 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:19:20.706604 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:19:20.706610 kernel: pid_max: default: 131072 minimum: 1024 Sep 9 00:19:20.706615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 00:19:20.706622 kernel: landlock: Up and running. Sep 9 00:19:20.706628 kernel: SELinux: Initializing. Sep 9 00:19:20.706633 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 00:19:20.706639 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 00:19:20.706644 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 9 00:19:20.706650 kernel: Performance Events: Skylake events, core PMU driver. Sep 9 00:19:20.706655 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 9 00:19:20.706661 kernel: core: CPUID marked event: 'instructions' unavailable Sep 9 00:19:20.706670 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 9 00:19:20.706680 kernel: core: CPUID marked event: 'cache references' unavailable Sep 9 00:19:20.706685 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 9 00:19:20.706691 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 9 00:19:20.706696 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 9 00:19:20.706701 kernel: ... version: 1 Sep 9 00:19:20.706707 kernel: ... bit width: 48 Sep 9 00:19:20.706712 kernel: ... generic registers: 4 Sep 9 00:19:20.706718 kernel: ... value mask: 0000ffffffffffff Sep 9 00:19:20.706723 kernel: ... max period: 000000007fffffff Sep 9 00:19:20.706730 kernel: ... fixed-purpose events: 0 Sep 9 00:19:20.706735 kernel: ... 
event mask: 000000000000000f Sep 9 00:19:20.706741 kernel: signal: max sigframe size: 1776 Sep 9 00:19:20.706746 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:19:20.706752 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:19:20.706757 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Sep 9 00:19:20.706763 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 00:19:20.706768 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:19:20.706774 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:19:20.706780 kernel: .... node #0, CPUs: #1 Sep 9 00:19:20.706786 kernel: Disabled fast string operations Sep 9 00:19:20.706791 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 00:19:20.706797 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 9 00:19:20.706802 kernel: Memory: 1926316K/2096628K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 158940K reserved, 0K cma-reserved) Sep 9 00:19:20.706808 kernel: devtmpfs: initialized Sep 9 00:19:20.706813 kernel: x86/mm: Memory block size: 128MB Sep 9 00:19:20.706819 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 9 00:19:20.706824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:19:20.706831 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 9 00:19:20.706836 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:19:20.706842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:19:20.706847 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:19:20.706853 kernel: audit: type=2000 audit(1757377157.281:1): state=initialized audit_enabled=0 res=1 Sep 9 00:19:20.706858 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:19:20.706864 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:19:20.706869 kernel: cpuidle: using governor menu Sep 9 00:19:20.706875 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 9 00:19:20.706881 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:19:20.706887 kernel: dca service started, version 1.12.1 Sep 9 00:19:20.706899 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Sep 9 00:19:20.706905 kernel: PCI: Using configuration type 1 for base access Sep 9 00:19:20.706911 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 00:19:20.706917 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:19:20.706923 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:19:20.706929 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:19:20.706934 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:19:20.706941 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:19:20.706947 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:19:20.706953 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:19:20.706958 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:19:20.706964 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 9 00:19:20.706970 kernel: ACPI: Interpreter enabled Sep 9 00:19:20.706976 kernel: ACPI: PM: (supports S0 S1 S5) Sep 9 00:19:20.706981 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:19:20.706987 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:19:20.706994 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:19:20.707000 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 9 00:19:20.707006 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 9 00:19:20.707083 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:19:20.707136 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 9 00:19:20.707184 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 9 00:19:20.707192 kernel: PCI host bridge to bus 0000:00 Sep 9 00:19:20.707244 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:19:20.707289 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 9 00:19:20.707332 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:19:20.707375 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:19:20.707417 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 9 00:19:20.707459 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 9 00:19:20.707519 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Sep 9 00:19:20.707580 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Sep 9 00:19:20.707676 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:19:20.707747 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Sep 9 00:19:20.707805 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Sep 9 00:19:20.707856 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Sep 9 00:19:20.707905 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Sep 9 00:19:20.707955 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Sep 9 00:19:20.708005 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Sep 9 00:19:20.708053 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Sep 9 00:19:20.708106 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 9 00:19:20.708160 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 9 00:19:20.708209 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 9 00:19:20.708265 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint Sep 9 
00:19:20.708316 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Sep 9 00:19:20.708366 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Sep 9 00:19:20.708421 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Sep 9 00:19:20.708474 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Sep 9 00:19:20.708523 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Sep 9 00:19:20.708573 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Sep 9 00:19:20.708661 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Sep 9 00:19:20.708711 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:19:20.708766 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Sep 9 00:19:20.708817 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 00:19:20.708869 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:19:20.708919 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 00:19:20.708968 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:19:20.709022 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.709073 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:19:20.709123 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:19:20.709173 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:19:20.709226 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.709280 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.709332 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:19:20.709381 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:19:20.709432 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:19:20.709481 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:19:20.709532 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.709599 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.709652 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:19:20.709708 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:19:20.709758 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 00:19:20.709809 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:19:20.709859 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.709916 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.709967 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:19:20.710017 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:19:20.710067 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:19:20.710117 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.710171 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.710222 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:19:20.710274 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:19:20.710324 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:19:20.710374 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 9 
00:19:20.710428 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.710479 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:19:20.710528 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:19:20.710578 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:19:20.710682 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.710741 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.710793 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:19:20.710844 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:19:20.710894 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:19:20.710943 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.710997 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.711049 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:19:20.711104 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:19:20.711155 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:19:20.711205 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.711261 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.711311 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:19:20.711361 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:19:20.711411 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:19:20.711463 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.711519 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.711569 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:19:20.711634 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 00:19:20.711685 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 00:19:20.711735 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:19:20.711784 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.711841 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.711892 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:19:20.711942 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 00:19:20.711994 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 00:19:20.712044 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:19:20.712094 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.712151 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.712204 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:19:20.712254 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 00:19:20.712304 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:19:20.712354 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.712408 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.712459 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:19:20.712509 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 00:19:20.712561 kernel: pci 0000:00:16.4: 
bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:19:20.712763 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.712825 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.712878 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:19:20.712928 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 00:19:20.712978 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:19:20.713028 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.713085 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.713136 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:19:20.713185 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 00:19:20.713236 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 00:19:20.713285 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.713341 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.713392 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:19:20.713445 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 00:19:20.713496 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:19:20.713546 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.713782 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.713841 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:19:20.713892 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 00:19:20.713943 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 00:19:20.713996 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:19:20.714050 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.714104 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.714156 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:19:20.714209 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 00:19:20.714259 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 00:19:20.714308 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:19:20.714358 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.714413 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.714464 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:19:20.714513 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 00:19:20.714566 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 00:19:20.714675 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:19:20.714727 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.716253 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.716318 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:19:20.716373 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 00:19:20.716425 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:19:20.716479 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.716535 kernel: pci 0000:00:17.4: [15ad:07a0] type 
01 class 0x060400 PCIe Root Port Sep 9 00:19:20.716598 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:19:20.716651 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 00:19:20.716702 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 00:19:20.716752 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.716807 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.716861 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:19:20.716911 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 9 00:19:20.716961 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:19:20.717012 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.717067 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.717118 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:19:20.717168 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 00:19:20.717219 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:19:20.717271 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.717327 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.717382 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:19:20.717432 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 00:19:20.717482 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:19:20.717531 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.717599 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.717656 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:19:20.717707 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 00:19:20.717758 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 00:19:20.717808 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:19:20.717858 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.717913 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.717964 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:19:20.718017 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 00:19:20.718069 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 00:19:20.718119 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:19:20.718169 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.718224 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.718275 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:19:20.718326 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 00:19:20.718378 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:19:20.718428 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.718483 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.718534 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:19:20.718746 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 00:19:20.718806 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 
64bit pref] Sep 9 00:19:20.718858 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.718919 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.718971 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:19:20.719022 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 9 00:19:20.719072 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:19:20.719123 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.719177 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.719228 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:19:20.719282 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 00:19:20.719332 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:19:20.719383 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.719438 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.719489 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:19:20.719539 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 00:19:20.720273 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:19:20.720338 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.720398 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:19:20.720452 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:19:20.720504 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 00:19:20.720554 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:19:20.721508 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.721571 kernel: pci_bus 0000:01: extended config space not accessible Sep 9 00:19:20.721636 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:19:20.721696 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 00:19:20.721715 kernel: acpiphp: Slot [32] registered Sep 9 00:19:20.721722 kernel: acpiphp: Slot [33] registered Sep 9 00:19:20.721728 kernel: acpiphp: Slot [34] registered Sep 9 00:19:20.721734 kernel: acpiphp: Slot [35] registered Sep 9 00:19:20.721740 kernel: acpiphp: Slot [36] registered Sep 9 00:19:20.721745 kernel: acpiphp: Slot [37] registered Sep 9 00:19:20.721751 kernel: acpiphp: Slot [38] registered Sep 9 00:19:20.721759 kernel: acpiphp: Slot [39] registered Sep 9 00:19:20.721765 kernel: acpiphp: Slot [40] registered Sep 9 00:19:20.721771 kernel: acpiphp: Slot [41] registered Sep 9 00:19:20.721777 kernel: acpiphp: Slot [42] registered Sep 9 00:19:20.721783 kernel: acpiphp: Slot [43] registered Sep 9 00:19:20.721788 kernel: acpiphp: Slot [44] registered Sep 9 00:19:20.721794 kernel: acpiphp: Slot [45] registered Sep 9 00:19:20.721800 kernel: acpiphp: Slot [46] registered Sep 9 00:19:20.721806 kernel: acpiphp: Slot [47] registered Sep 9 00:19:20.721812 kernel: acpiphp: Slot [48] registered Sep 9 00:19:20.721819 kernel: acpiphp: Slot [49] registered Sep 9 00:19:20.721825 kernel: acpiphp: Slot [50] registered Sep 9 00:19:20.721831 kernel: acpiphp: Slot [51] registered Sep 9 00:19:20.721837 kernel: acpiphp: Slot [52] registered Sep 9 00:19:20.721844 kernel: acpiphp: Slot [53] registered Sep 9 00:19:20.721854 kernel: acpiphp: Slot [54] registered Sep 9 00:19:20.721860 kernel: acpiphp: Slot [55] registered Sep 
9 00:19:20.721866 kernel: acpiphp: Slot [56] registered Sep 9 00:19:20.721872 kernel: acpiphp: Slot [57] registered Sep 9 00:19:20.721880 kernel: acpiphp: Slot [58] registered Sep 9 00:19:20.721886 kernel: acpiphp: Slot [59] registered Sep 9 00:19:20.721892 kernel: acpiphp: Slot [60] registered Sep 9 00:19:20.721898 kernel: acpiphp: Slot [61] registered Sep 9 00:19:20.721904 kernel: acpiphp: Slot [62] registered Sep 9 00:19:20.721909 kernel: acpiphp: Slot [63] registered Sep 9 00:19:20.721964 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 00:19:20.722015 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 9 00:19:20.722066 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 9 00:19:20.722117 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 9 00:19:20.722167 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 9 00:19:20.722216 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 9 00:19:20.722273 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Sep 9 00:19:20.722325 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Sep 9 00:19:20.722388 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 9 00:19:20.722457 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 9 00:19:20.722524 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 9 00:19:20.723626 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Sep 9 00:19:20.723692 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:19:20.723746 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:19:20.723799 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:19:20.723851 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:19:20.723901 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:19:20.723952 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:19:20.724006 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:19:20.724068 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:19:20.724127 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Sep 9 00:19:20.724187 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Sep 9 00:19:20.724247 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Sep 9 00:19:20.724298 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Sep 9 00:19:20.724348 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Sep 9 00:19:20.724402 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 9 00:19:20.724451 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 9 00:19:20.724502 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 9 00:19:20.724551 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 00:19:20.724611 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:19:20.724662 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:19:20.724744 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:19:20.724822 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:19:20.724910 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:19:20.724973 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:19:20.725026 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:19:20.725076 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:19:20.725126 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:19:20.725177 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:19:20.725226 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:19:20.725279 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:19:20.725329 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:19:20.725379 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:19:20.725428 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:19:20.725479 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:19:20.725529 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:19:20.725579 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:19:20.725640 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:19:20.725711 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:19:20.725777 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:19:20.725827 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:19:20.725877 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:19:20.725926 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:19:20.725934 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 9 00:19:20.725943 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 9 00:19:20.725949 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 9 00:19:20.725955 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:19:20.725961 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 9 00:19:20.725967 kernel: iommu: Default domain type: Translated Sep 9 00:19:20.725973 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:19:20.725979 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:19:20.725985 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:19:20.725991 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 9 00:19:20.725998 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 9 00:19:20.726047 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 9 00:19:20.726095 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 9 00:19:20.726144 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:19:20.726152 kernel: vgaarb: loaded Sep 9 00:19:20.726159 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 9 00:19:20.726165 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 9 00:19:20.726171 kernel: clocksource: Switched to clocksource tsc-early Sep 9 00:19:20.726177 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:19:20.726185 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:19:20.726191 kernel: pnp: PnP ACPI init Sep 9 00:19:20.726244 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 9 00:19:20.726290 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 9 
00:19:20.726334 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 9 00:19:20.726383 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 9 00:19:20.726431 kernel: pnp 00:06: [dma 2] Sep 9 00:19:20.726483 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 9 00:19:20.726528 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 9 00:19:20.726572 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 9 00:19:20.726581 kernel: pnp: PnP ACPI: found 8 devices Sep 9 00:19:20.728119 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:19:20.728126 kernel: NET: Registered PF_INET protocol family Sep 9 00:19:20.728132 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:19:20.728139 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 00:19:20.728147 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:19:20.728153 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 00:19:20.728159 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 00:19:20.728165 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 9 00:19:20.728171 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 00:19:20.728177 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 00:19:20.728183 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:19:20.728195 kernel: NET: Registered PF_XDP protocol family Sep 9 00:19:20.728265 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 9 00:19:20.728322 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 9 00:19:20.728375 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 9 00:19:20.728425 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 9 00:19:20.728476 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 9 00:19:20.728527 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 9 00:19:20.728577 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 9 00:19:20.729109 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 9 00:19:20.729167 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 9 00:19:20.729219 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 9 00:19:20.729271 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 9 00:19:20.729323 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 9 00:19:20.729373 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 9 00:19:20.729423 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 9 00:19:20.729473 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 9 00:19:20.729524 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 9 00:19:20.729577 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 
1000 Sep 9 00:19:20.729646 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 9 00:19:20.729697 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 9 00:19:20.729748 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 9 00:19:20.729798 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 9 00:19:20.729849 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 9 00:19:20.729899 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 9 00:19:20.729949 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Sep 9 00:19:20.730003 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Sep 9 00:19:20.730054 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.730104 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.730154 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.730204 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.730255 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.730305 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.730357 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.730407 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.730457 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.730506 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.730557 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.732634 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.732696 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.732747 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.732802 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.732852 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.732903 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.732953 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733004 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733054 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733104 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733154 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733207 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733257 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733307 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733357 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733407 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't 
assign; no space Sep 9 00:19:20.733457 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733506 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733555 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733621 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733671 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733720 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733771 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733821 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733870 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.733921 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.733971 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.734023 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.734074 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.734124 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.734174 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.734223 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.734273 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.734323 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.734373 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.734425 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.734474 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.734523 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.734573 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735002 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735056 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735107 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735158 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735209 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735259 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735313 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735362 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735414 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735464 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735513 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735564 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735634 kernel: pci 
0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735690 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735740 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735794 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735844 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735894 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.735944 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.735994 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.736044 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.736095 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.736159 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.736209 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.736261 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.736310 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.736362 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.736410 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.736460 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.736509 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.736561 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.739651 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.739714 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:19:20.739768 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:19:20.739822 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:19:20.739877 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 9 00:19:20.739941 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:19:20.739995 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 00:19:20.740049 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:19:20.740113 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Sep 9 00:19:20.740167 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:19:20.740217 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:19:20.740268 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:19:20.740318 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 00:19:20.740371 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:19:20.740422 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:19:20.740471 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:19:20.740521 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:19:20.740575 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:19:20.740635 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:19:20.740689 kernel: pci 
0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 00:19:20.740739 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:19:20.740791 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:19:20.740841 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:19:20.740890 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:19:20.740941 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:19:20.740991 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:19:20.741044 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:19:20.741095 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:19:20.741144 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:19:20.741194 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:19:20.741245 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:19:20.741295 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:19:20.741344 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:19:20.741398 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:19:20.741448 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:19:20.741497 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:19:20.741551 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Sep 9 00:19:20.743148 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:19:20.743206 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:19:20.743258 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:19:20.743309 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 00:19:20.743362 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:19:20.743412 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 00:19:20.743462 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 00:19:20.743512 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:19:20.743563 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:19:20.743782 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 00:19:20.743836 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 00:19:20.743886 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:19:20.743935 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:19:20.743984 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 00:19:20.744036 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:19:20.744086 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:19:20.744135 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 00:19:20.744184 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:19:20.744254 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:19:20.744312 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 00:19:20.744367 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:19:20.744421 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:19:20.744470 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 
00:19:20.744528 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 00:19:20.745667 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:19:20.745761 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 00:19:20.745812 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:19:20.745864 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:19:20.745917 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 00:19:20.745967 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 00:19:20.746016 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:19:20.746066 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:19:20.746115 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 00:19:20.746164 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 00:19:20.746213 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:19:20.746263 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:19:20.746311 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 00:19:20.746360 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 00:19:20.746411 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:19:20.746461 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:19:20.746510 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 00:19:20.746558 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:19:20.746633 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:19:20.746684 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 00:19:20.746733 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 00:19:20.746786 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:19:20.746835 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 9 00:19:20.746884 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:19:20.746933 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:19:20.746982 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 00:19:20.747030 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:19:20.747079 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:19:20.747128 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 00:19:20.747180 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:19:20.747230 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:19:20.747279 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 00:19:20.747328 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 00:19:20.747377 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:19:20.747428 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:19:20.747477 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 00:19:20.747526 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 00:19:20.747575 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:19:20.747636 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:19:20.747688 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 
00:19:20.747737 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:19:20.747786 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:19:20.747835 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 00:19:20.747883 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 00:19:20.747934 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:19:20.747985 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 9 00:19:20.748034 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:19:20.748083 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:19:20.748132 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 00:19:20.748181 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:19:20.748263 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:19:20.748312 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 00:19:20.748360 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:19:20.748412 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:19:20.748460 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 00:19:20.748508 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:19:20.748556 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 9 00:19:20.748607 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 9 00:19:20.748666 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 9 00:19:20.748745 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 9 00:19:20.748792 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 9 00:19:20.748838 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 9 00:19:20.748884 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 9 00:19:20.748929 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:19:20.748973 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 9 00:19:20.749017 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 9 00:19:20.749065 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 9 00:19:20.749112 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 9 00:19:20.749157 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 9 00:19:20.749207 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 9 00:19:20.749254 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 9 00:19:20.749298 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 00:19:20.749346 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 9 00:19:20.749393 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 9 00:19:20.749439 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:19:20.749488 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 9 00:19:20.749534 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 9 00:19:20.749578 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:19:20.749638 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 9 00:19:20.749684 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:19:20.749733 kernel: 
pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 9 00:19:20.749781 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:19:20.749830 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 9 00:19:20.749876 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:19:20.749924 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 9 00:19:20.749970 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:19:20.750036 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 9 00:19:20.750084 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:19:20.750134 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 9 00:19:20.750179 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 9 00:19:20.750223 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 00:19:20.750272 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 9 00:19:20.750320 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 9 00:19:20.750365 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:19:20.750414 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 9 00:19:20.750460 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 9 00:19:20.750504 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:19:20.750554 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 9 00:19:20.750617 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:19:20.750698 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 9 00:19:20.750747 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:19:20.750796 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 9 00:19:20.750841 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:19:20.750889 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 9 00:19:20.750935 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 00:19:20.750986 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 9 00:19:20.751032 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:19:20.751082 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 9 00:19:20.751127 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 9 00:19:20.751172 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:19:20.751220 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 9 00:19:20.751266 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 9 00:19:20.751314 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:19:20.751362 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 9 00:19:20.751414 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 9 00:19:20.751465 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:19:20.751514 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 9 00:19:20.751560 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:19:20.751622 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 9 00:19:20.751668 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 
00:19:20.751759 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 9 00:19:20.751823 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:19:20.751874 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 9 00:19:20.751920 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:19:20.751969 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 9 00:19:20.752018 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:19:20.752067 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 9 00:19:20.752113 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 9 00:19:20.752158 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:19:20.752208 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 9 00:19:20.752253 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 9 00:19:20.752300 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:19:20.752360 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 9 00:19:20.752407 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:19:20.752456 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 9 00:19:20.752502 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 00:19:20.752551 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 9 00:19:20.752615 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:19:20.752666 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 9 00:19:20.752721 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:19:20.752770 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 9 00:19:20.752815 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:19:20.752863 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 9 00:19:20.752908 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:19:20.752965 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 9 00:19:20.752974 kernel: PCI: CLS 32 bytes, default 64 Sep 9 00:19:20.752981 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 00:19:20.752987 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 00:19:20.752993 kernel: clocksource: Switched to clocksource tsc Sep 9 00:19:20.752999 kernel: Initialise system trusted keyrings Sep 9 00:19:20.753005 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 00:19:20.753011 kernel: Key type asymmetric registered Sep 9 00:19:20.753018 kernel: Asymmetric key parser 'x509' registered Sep 9 00:19:20.753024 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 00:19:20.753030 kernel: io scheduler mq-deadline registered Sep 9 00:19:20.753036 kernel: io scheduler kyber registered Sep 9 00:19:20.753042 kernel: io scheduler bfq registered Sep 9 00:19:20.753091 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 9 00:19:20.753142 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753192 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 9 00:19:20.753244 kernel: pcieport 
0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753293 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 9 00:19:20.753342 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753392 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 9 00:19:20.753442 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753491 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 9 00:19:20.753540 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753611 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 9 00:19:20.753663 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753724 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 9 00:19:20.753774 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753824 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 9 00:19:20.753874 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.753924 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 9 00:19:20.753974 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754026 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 9 00:19:20.754076 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754126 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 9 00:19:20.754175 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754224 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 9 00:19:20.754285 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754338 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 9 00:19:20.754389 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754439 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 9 00:19:20.754488 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754538 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 9 00:19:20.754604 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754658 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 9 00:19:20.754743 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754796 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 9 00:19:20.754846 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.754897 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 9 00:19:20.755638 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.755704 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 9 00:19:20.755758 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.755810 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 9 00:19:20.755860 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.755913 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 9 00:19:20.755964 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756014 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 9 00:19:20.756064 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756114 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 9 00:19:20.756163 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756214 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 9 00:19:20.756266 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756316 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 9 00:19:20.756367 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756417 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 9 00:19:20.756466 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756515 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 9 00:19:20.756565 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756648 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 9 00:19:20.756701 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756750 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 9 00:19:20.756800 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756850 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 9 00:19:20.756901 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.756951 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 9 00:19:20.757001 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.757054 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 9 00:19:20.757104 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:19:20.757116 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:19:20.757122 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:19:20.757129 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:19:20.757135 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 9 00:19:20.757142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:19:20.757150 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:19:20.757201 kernel: rtc_cmos 00:01: registered as rtc0 Sep 9 00:19:20.757248 kernel: rtc_cmos 00:01: setting system clock to 2025-09-09T00:19:20 UTC (1757377160) Sep 9 00:19:20.757293 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 9 00:19:20.757302 kernel: intel_pstate: CPU model not supported Sep 9 00:19:20.757309 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:19:20.757315 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:19:20.757321 kernel: Segment Routing with IPv6 Sep 9 00:19:20.757329 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:19:20.757336 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:19:20.757342 kernel: Key type dns_resolver registered Sep 9 00:19:20.757348 kernel: IPI shorthand broadcast: enabled Sep 9 00:19:20.757355 kernel: sched_clock: Marking stable (2700086207, 178126905)->(2891528103, -13314991) Sep 9 00:19:20.757361 kernel: registered taskstats version 1 Sep 9 00:19:20.757367 kernel: Loading compiled-in X.509 certificates Sep 9 00:19:20.757373 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75' Sep 9 00:19:20.757379 kernel: Demotion targets for Node 0: null Sep 9 00:19:20.757387 kernel: Key type .fscrypt registered Sep 9 00:19:20.757393 kernel: Key type fscrypt-provisioning registered Sep 9 00:19:20.757399 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:19:20.757406 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:19:20.757412 kernel: ima: No architecture policies found Sep 9 00:19:20.757418 kernel: clk: Disabling unused clocks Sep 9 00:19:20.757425 kernel: Warning: unable to open an initial console. Sep 9 00:19:20.757431 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 9 00:19:20.757438 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:19:20.757445 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 9 00:19:20.757451 kernel: Run /init as init process Sep 9 00:19:20.757457 kernel: with arguments: Sep 9 00:19:20.757464 kernel: /init Sep 9 00:19:20.757470 kernel: with environment: Sep 9 00:19:20.757476 kernel: HOME=/ Sep 9 00:19:20.757482 kernel: TERM=linux Sep 9 00:19:20.757488 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:19:20.757495 systemd[1]: Successfully made /usr/ read-only. 
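The log up to this point is syslog-style "timestamp source: message" entries. Runs of repeated kernel messages, such as the bridge-window "can't assign; no space" / "failed to assign" pairs above, are easier to audit when tallied per PCI function. A minimal Python sketch over a saved copy of this log (the boot.log filename is illustrative):

```python
import re
from collections import Counter

# Matches kernel entries like:
#   pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
ENTRY = re.compile(
    r"pci (?P<dev>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]): "
    r"bridge window \[io size 0x1000\]: (?P<result>can't assign; no space|failed to assign)"
)

def tally_bridge_window_failures(log_path):
    """Count bridge-window assignment failures per PCI function in a saved boot log."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for match in ENTRY.finditer(line):
                counts[(match["dev"], match["result"])] += 1
    return counts

if __name__ == "__main__":
    for (dev, result), n in sorted(tally_bridge_window_failures("boot.log").items()):
        print(f"{dev}  {result}: {n}")
```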
Sep 9 00:19:20.757505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:19:20.757512 systemd[1]: Detected virtualization vmware. Sep 9 00:19:20.757518 systemd[1]: Detected architecture x86-64. Sep 9 00:19:20.757524 systemd[1]: Running in initrd. Sep 9 00:19:20.757531 systemd[1]: No hostname configured, using default hostname. Sep 9 00:19:20.757537 systemd[1]: Hostname set to . Sep 9 00:19:20.757544 systemd[1]: Initializing machine ID from random generator. Sep 9 00:19:20.757551 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:19:20.757557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:19:20.757564 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:19:20.757571 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:19:20.757578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:19:20.758598 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:19:20.758608 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:19:20.758618 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:19:20.758625 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:19:20.758632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:19:20.758638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:19:20.758645 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:19:20.758651 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:19:20.758657 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:19:20.758664 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:19:20.758674 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:19:20.758682 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:19:20.758688 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:19:20.758695 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:19:20.758701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:19:20.758708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:19:20.758716 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:19:20.758722 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:19:20.758729 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:19:20.758737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:19:20.758743 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
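The device unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's escaped form of the underlying /dev paths, the same transformation `systemd-escape --path` applies. A small sketch of that escaping; the asserts reproduce the unit names logged above, though the real implementation covers more edge cases:

```python
import string

ALLOWED = set(string.ascii_letters + string.digits + ":_.")

def path_to_device_unit(path):
    """Escape a /dev path into a systemd .device unit name.

    '/' separators become '-'; every other character outside [A-Za-z0-9:_.]
    (including '-') becomes \\xNN, the lowercase hex of the byte.
    """
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch in ALLOWED and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.extend(f"\\x{b:02x}" for b in ch.encode("utf-8"))
    return "".join(out) + ".device"

# Matches the unit names logged above:
assert path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM") == "dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM.device"
assert path_to_device_unit("/dev/mapper/usr") == "dev-mapper-usr.device"
```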
Sep 9 00:19:20.758750 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:19:20.758756 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:19:20.758763 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:19:20.758769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:19:20.758776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:19:20.758797 systemd-journald[243]: Collecting audit messages is disabled. Sep 9 00:19:20.758815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:19:20.758822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:19:20.758830 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:19:20.758837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:19:20.758843 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:19:20.758850 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:19:20.758857 kernel: Bridge firewalling registered Sep 9 00:19:20.758864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:19:20.758870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:19:20.758878 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:19:20.758885 systemd-journald[243]: Journal started Sep 9 00:19:20.758906 systemd-journald[243]: Runtime Journal (/run/log/journal/036f2f0ac8fa4b529070a24a77225d74) is 4.8M, max 38.9M, 34M free. Sep 9 00:19:20.724978 systemd-modules-load[244]: Inserted module 'overlay' Sep 9 00:19:20.763637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:19:20.763652 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:19:20.745751 systemd-modules-load[244]: Inserted module 'br_netfilter' Sep 9 00:19:20.765687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:19:20.765891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:19:20.767274 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:19:20.768638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:19:20.778817 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 00:19:20.781078 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:19:20.781937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:19:20.785122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:19:20.785945 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
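systemd-journald is now collecting these messages into the runtime journal noted above (/run/log/journal/...). journalctl can replay them as one JSON object per line, which is convenient for filtering from a script. A short sketch, assuming journalctl is available on the machine being inspected:

```python
import json
import subprocess

def boot_messages(identifier=None):
    """Yield (realtime_usec, message) pairs for the current boot from journald."""
    out = subprocess.run(
        ["journalctl", "-b", "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        entry = json.loads(line)
        if identifier and entry.get("SYSLOG_IDENTIFIER") != identifier:
            continue
        # MESSAGE is normally a string; binary payloads appear as byte arrays.
        yield int(entry["__REALTIME_TIMESTAMP"]), entry.get("MESSAGE", "")

if __name__ == "__main__":
    # Kernel messages only, mirroring the "kernel:" entries in this log.
    for ts_usec, msg in boot_messages("kernel"):
        print(ts_usec, msg)
```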
Sep 9 00:19:20.802168 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:19:20.818114 systemd-resolved[275]: Positive Trust Anchors: Sep 9 00:19:20.818123 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:19:20.818145 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:19:20.819948 systemd-resolved[275]: Defaulting to hostname 'linux'. Sep 9 00:19:20.820521 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:19:20.820806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:19:20.855608 kernel: SCSI subsystem initialized Sep 9 00:19:20.875605 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:19:20.885609 kernel: iscsi: registered transport (tcp) Sep 9 00:19:20.911622 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:19:20.911715 kernel: QLogic iSCSI HBA Driver Sep 9 00:19:20.923123 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:19:20.933558 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:19:20.934603 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:19:20.957863 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:19:20.958752 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:19:20.995619 kernel: raid6: avx2x4 gen() 46741 MB/s Sep 9 00:19:21.012604 kernel: raid6: avx2x2 gen() 52479 MB/s Sep 9 00:19:21.029840 kernel: raid6: avx2x1 gen() 44588 MB/s Sep 9 00:19:21.029882 kernel: raid6: using algorithm avx2x2 gen() 52479 MB/s Sep 9 00:19:21.047826 kernel: raid6: .... xor() 31735 MB/s, rmw enabled Sep 9 00:19:21.047901 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:19:21.062608 kernel: xor: automatically using best checksumming function avx Sep 9 00:19:21.170604 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:19:21.175223 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:19:21.176508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:19:21.200240 systemd-udevd[492]: Using default interface naming scheme 'v255'. Sep 9 00:19:21.204226 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:19:21.205088 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
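The dracut-cmdline line above echoes the full kernel command line: whitespace-separated tokens that are either bare flags (flatcar.autologin) or key=value pairs, with some keys repeated (console=). A small parser sketch over an abridged copy of that line (it ignores quoted values, which the kernel does support):

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into {key: [values]}; bare flags get value None."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.setdefault(key, []).append(value if sep else None)
    return params

# Abridged from the dracut-cmdline line above:
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin"
)
params = parse_cmdline(cmdline)
assert params["console"] == ["ttyS0,115200n8", "tty0"]  # repeated keys keep their order
assert params["flatcar.autologin"] == [None]            # bare flag, no value
assert params["root"] == ["LABEL=ROOT"]                 # '=' inside a value is preserved
```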
Sep 9 00:19:21.224016 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Sep 9 00:19:21.241192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:19:21.242250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:19:21.323224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:19:21.325854 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:19:21.385613 kernel: libata version 3.00 loaded. Sep 9 00:19:21.387598 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 9 00:19:21.396912 kernel: scsi host0: ata_piix Sep 9 00:19:21.397166 kernel: scsi host1: ata_piix Sep 9 00:19:21.397554 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Sep 9 00:19:21.398178 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Sep 9 00:19:21.428602 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 9 00:19:21.433094 kernel: vmw_pvscsi: using 64bit dma Sep 9 00:19:21.433121 kernel: vmw_pvscsi: max_id: 16 Sep 9 00:19:21.433130 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 9 00:19:21.435640 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 9 00:19:21.435658 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 9 00:19:21.435667 kernel: vmw_pvscsi: using MSI-X Sep 9 00:19:21.436829 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Sep 9 00:19:21.436844 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 9 00:19:21.440421 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 Sep 9 00:19:21.440524 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 9 00:19:21.440613 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 9 00:19:21.446597 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 9 00:19:21.460677 (udev-worker)[535]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 9 00:19:21.461599 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:19:21.464002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:19:21.464080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:19:21.464327 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:19:21.464902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:19:21.485130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
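The vmxnet3 probe above reports the virtual NIC's link state and speed. Once the interface exists, the same information is exposed under /sys/class/net/<ifname>/, which is an easy way to confirm it from a script; the default name below matches the ens192 name this log switches to a little later:

```python
from pathlib import Path

def link_status(ifname="ens192"):
    """Read carrier/speed/MAC for a NIC from sysfs."""
    base = Path("/sys/class/net") / ifname

    def read(name):
        try:
            return (base / name).read_text().strip()
        except OSError:
            return None  # e.g. 'speed' and 'carrier' reads fail while the link is down

    return {
        "operstate": read("operstate"),
        "carrier": read("carrier"),    # "1" when the link is up
        "speed_mbps": read("speed"),   # e.g. "10000" for the vmxnet3 link above
        "mac": read("address"),
    }

if __name__ == "__main__":
    print(link_status())
```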
Sep 9 00:19:21.571599 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 9 00:19:21.574600 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 9 00:19:21.577593 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 9 00:19:21.586103 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 9 00:19:21.586224 kernel: sd 2:0:0:0: [sda] Write Protect is off Sep 9 00:19:21.586292 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 9 00:19:21.589033 kernel: sd 2:0:0:0: [sda] Cache data unavailable Sep 9 00:19:21.589407 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through Sep 9 00:19:21.591605 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 00:19:21.594592 kernel: AES CTR mode by8 optimization enabled Sep 9 00:19:21.640602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 00:19:21.641597 kernel: sd 2:0:0:0: [sda] Attached SCSI disk Sep 9 00:19:21.655640 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 9 00:19:21.655810 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:19:21.670619 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:19:21.703356 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 9 00:19:21.710324 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 9 00:19:21.739259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Sep 9 00:19:21.757485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 9 00:19:21.757637 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 9 00:19:21.758273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:19:21.849601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 00:19:22.043490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:19:22.044100 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:19:22.044231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:19:22.044426 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:19:22.045085 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:19:22.059625 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:19:22.867614 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 00:19:22.868160 disk-uuid[645]: The operation has completed successfully. Sep 9 00:19:22.907840 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:19:22.907905 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:19:22.918399 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:19:22.931312 sh[674]: Success Sep 9 00:19:22.944900 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 9 00:19:22.944939 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:19:22.946218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:19:22.953607 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 00:19:22.997007 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:19:22.998653 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:19:23.007850 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:19:23.021396 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (686) Sep 9 00:19:23.021427 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7 Sep 9 00:19:23.021438 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:19:23.031312 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 00:19:23.031340 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:19:23.031352 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:19:23.034117 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:19:23.034454 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:19:23.035053 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 9 00:19:23.035642 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:19:23.072670 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (709) Sep 9 00:19:23.077401 kernel: BTRFS info (device sda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:19:23.077421 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:19:23.083595 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 00:19:23.083610 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 00:19:23.090608 kernel: BTRFS info (device sda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:19:23.091638 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:19:23.094243 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:19:23.134297 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 00:19:23.135650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:19:23.221627 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:19:23.222560 ignition[728]: Ignition 2.21.0 Sep 9 00:19:23.222567 ignition[728]: Stage: fetch-offline Sep 9 00:19:23.222740 ignition[728]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:19:23.223116 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 00:19:23.222747 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:19:23.222795 ignition[728]: parsed url from cmdline: "" Sep 9 00:19:23.222797 ignition[728]: no config URL provided Sep 9 00:19:23.222800 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:19:23.222804 ignition[728]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:19:23.223155 ignition[728]: config successfully fetched Sep 9 00:19:23.223173 ignition[728]: parsing config with SHA512: addfa98590ed9bf016342b701d689f1a66bd98e5e77228ad4e47af7c98df47772841c53a33383624abb83f830f8d1155830deb583b0558c415e57640dfa2d961 Sep 9 00:19:23.228500 unknown[728]: fetched base config from "system" Sep 9 00:19:23.228513 unknown[728]: fetched user config from "vmware" Sep 9 00:19:23.229030 ignition[728]: fetch-offline: fetch-offline passed Sep 9 00:19:23.229093 ignition[728]: Ignition finished successfully Sep 9 00:19:23.230089 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:19:23.250898 systemd-networkd[865]: lo: Link UP Sep 9 00:19:23.251109 systemd-networkd[865]: lo: Gained carrier Sep 9 00:19:23.256411 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 00:19:23.256533 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 00:19:23.251914 systemd-networkd[865]: Enumeration completed Sep 9 00:19:23.251979 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:19:23.252139 systemd[1]: Reached target network.target - Network. Sep 9 00:19:23.252176 systemd-networkd[865]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 9 00:19:23.252232 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:19:23.252729 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:19:23.254144 systemd-networkd[865]: ens192: Link UP Sep 9 00:19:23.254147 systemd-networkd[865]: ens192: Gained carrier Sep 9 00:19:23.273931 ignition[869]: Ignition 2.21.0 Sep 9 00:19:23.273941 ignition[869]: Stage: kargs Sep 9 00:19:23.274028 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:19:23.274034 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:19:23.274481 ignition[869]: kargs: kargs passed Sep 9 00:19:23.274508 ignition[869]: Ignition finished successfully Sep 9 00:19:23.276198 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:19:23.276987 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:19:23.290269 ignition[876]: Ignition 2.21.0 Sep 9 00:19:23.290279 ignition[876]: Stage: disks Sep 9 00:19:23.290369 ignition[876]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:19:23.290374 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:19:23.291016 ignition[876]: disks: disks passed Sep 9 00:19:23.291053 ignition[876]: Ignition finished successfully Sep 9 00:19:23.292112 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:19:23.292439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:19:23.292692 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:19:23.292926 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:19:23.293130 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 9 00:19:23.293338 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:19:23.294037 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:19:23.413367 systemd-fsck[885]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 9 00:19:23.422914 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:19:23.423965 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:19:23.996606 kernel: EXT4-fs (sda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 00:19:23.997550 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:19:23.998137 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:19:23.999610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:19:24.001647 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:19:24.003056 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:19:24.003315 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:19:24.003580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:19:24.016415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:19:24.017620 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:19:24.023607 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (893) Sep 9 00:19:24.025890 kernel: BTRFS info (device sda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:19:24.025919 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:19:24.031087 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 00:19:24.031125 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 00:19:24.032558 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:19:24.052914 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:19:24.055731 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:19:24.058554 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:19:24.061339 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:19:24.126088 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:19:24.126739 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:19:24.128660 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:19:24.141070 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:19:24.141607 kernel: BTRFS info (device sda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:19:24.159741 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 00:19:24.160478 ignition[1005]: INFO : Ignition 2.21.0 Sep 9 00:19:24.160732 ignition[1005]: INFO : Stage: mount Sep 9 00:19:24.160932 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:19:24.161065 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:19:24.161742 ignition[1005]: INFO : mount: mount passed Sep 9 00:19:24.161879 ignition[1005]: INFO : Ignition finished successfully Sep 9 00:19:24.162724 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:19:24.163373 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:19:24.178965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:19:24.197604 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1018) Sep 9 00:19:24.197635 kernel: BTRFS info (device sda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:19:24.199603 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:19:24.203023 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 00:19:24.203043 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 00:19:24.204214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:19:24.221094 ignition[1034]: INFO : Ignition 2.21.0 Sep 9 00:19:24.221094 ignition[1034]: INFO : Stage: files Sep 9 00:19:24.221445 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:19:24.221445 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:19:24.221696 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:19:24.222029 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:19:24.222029 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:19:24.223429 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:19:24.223569 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:19:24.223718 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:19:24.223657 unknown[1034]: wrote ssh authorized keys file for user: core Sep 9 00:19:24.225109 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 00:19:24.226282 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 00:19:24.272171 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:19:24.515947 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 00:19:24.516226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:19:24.516226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 00:19:24.773073 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:19:24.888777 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:19:24.889043 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:19:24.890182 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:19:24.897336 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:19:24.897544 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:19:24.897544 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:19:24.904044 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:19:24.904044 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:19:24.904475 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 00:19:25.164756 systemd-networkd[865]: ens192: Gained IPv6LL Sep 9 00:19:25.323522 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:19:26.363540 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:19:26.363540 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 9 00:19:26.364331 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 9 00:19:26.364331 ignition[1034]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Sep 9 00:19:26.364871 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:19:26.365184 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:19:26.365184 
ignition[1034]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Sep 9 00:19:26.365184 ignition[1034]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Sep 9 00:19:26.365773 ignition[1034]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:19:26.365773 ignition[1034]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:19:26.365773 ignition[1034]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Sep 9 00:19:26.365773 ignition[1034]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:19:26.392312 ignition[1034]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:19:26.394708 ignition[1034]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:19:26.394708 ignition[1034]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:19:26.394708 ignition[1034]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:19:26.394708 ignition[1034]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:19:26.394708 ignition[1034]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:19:26.396157 ignition[1034]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:19:26.396157 ignition[1034]: INFO : files: files passed Sep 9 00:19:26.396157 ignition[1034]: INFO : Ignition finished successfully Sep 9 00:19:26.396167 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:19:26.397002 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:19:26.397644 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:19:26.412428 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:19:26.412502 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:19:26.414638 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:19:26.414638 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:19:26.415505 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:19:26.416438 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:19:26.416778 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:19:26.417415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:19:26.447777 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:19:26.447838 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:19:26.448120 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:19:26.448231 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Sep 9 00:19:26.448444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:19:26.448889 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:19:26.457361 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:19:26.458078 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:19:26.468217 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:19:26.468400 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:19:26.468630 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:19:26.468869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:19:26.468941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:19:26.469323 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:19:26.469471 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:19:26.469669 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:19:26.469863 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:19:26.470066 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:19:26.470273 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:19:26.470478 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:19:26.470686 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:19:26.470892 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:19:26.471098 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:19:26.471284 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:19:26.471446 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:19:26.471515 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:19:26.471781 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:19:26.472014 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:19:26.472200 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:19:26.472245 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:19:26.472415 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:19:26.472476 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:19:26.472755 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:19:26.472819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:19:26.473052 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:19:26.473201 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:19:26.478601 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:19:26.478763 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:19:26.478982 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:19:26.479177 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:19:26.479227 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 9 00:19:26.479467 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:19:26.479530 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:19:26.479792 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:19:26.479876 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:19:26.480117 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:19:26.480194 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:19:26.480865 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:19:26.480962 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:19:26.481048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:19:26.482689 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:19:26.482808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:19:26.482897 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:19:26.483177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:19:26.483258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:19:26.486478 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:19:26.488539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:19:26.496647 ignition[1091]: INFO : Ignition 2.21.0 Sep 9 00:19:26.496907 ignition[1091]: INFO : Stage: umount Sep 9 00:19:26.497096 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:19:26.497227 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:19:26.498097 ignition[1091]: INFO : umount: umount passed Sep 9 00:19:26.498097 ignition[1091]: INFO : Ignition finished successfully Sep 9 00:19:26.499107 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:19:26.499185 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:19:26.499433 systemd[1]: Stopped target network.target - Network. Sep 9 00:19:26.499537 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:19:26.499564 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:19:26.499728 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:19:26.499750 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:19:26.499894 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:19:26.499914 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:19:26.500062 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:19:26.500082 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:19:26.500283 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:19:26.500529 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:19:26.501908 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:19:26.501975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:19:26.503478 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:19:26.503697 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:19:26.503727 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 00:19:26.504549 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:19:26.507216 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:19:26.507274 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:19:26.508519 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:19:26.508874 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:19:26.509143 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:19:26.509291 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:19:26.509991 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:19:26.510222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:19:26.510358 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:19:26.510681 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 9 00:19:26.510816 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 00:19:26.511111 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:19:26.511240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:19:26.511549 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:19:26.511672 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:19:26.511975 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:19:26.513229 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:19:26.519561 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:19:26.519656 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:19:26.523905 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:19:26.523981 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:19:26.524234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:19:26.524256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:19:26.524458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:19:26.524473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:19:26.524642 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:19:26.524666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:19:26.524930 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:19:26.524954 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:19:26.525280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:19:26.525303 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:19:26.526017 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:19:26.526124 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:19:26.526149 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:19:26.526316 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 9 00:19:26.526339 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:19:26.526612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:19:26.526634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:19:26.529189 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 00:19:26.529221 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 00:19:26.529247 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:19:26.535631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:19:26.535680 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:19:26.671636 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:19:26.671704 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:19:26.671980 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:19:26.672106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:19:26.672133 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:19:26.672708 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:19:26.693156 systemd[1]: Switching root. Sep 9 00:19:26.734984 systemd-journald[243]: Journal stopped Sep 9 00:19:28.026220 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Sep 9 00:19:28.026248 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:19:28.026258 kernel: SELinux: policy capability open_perms=1 Sep 9 00:19:28.026264 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:19:28.026271 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:19:28.026279 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:19:28.026729 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:19:28.026745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:19:28.026752 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:19:28.026759 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:19:28.026766 systemd[1]: Successfully loaded SELinux policy in 35.686ms. Sep 9 00:19:28.026774 kernel: audit: type=1403 audit(1757377167.395:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:19:28.026784 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.116ms. Sep 9 00:19:28.026792 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:19:28.026800 systemd[1]: Detected virtualization vmware. Sep 9 00:19:28.026808 systemd[1]: Detected architecture x86-64. Sep 9 00:19:28.026817 systemd[1]: Detected first boot. Sep 9 00:19:28.026824 systemd[1]: Initializing machine ID from random generator. Sep 9 00:19:28.026832 zram_generator::config[1137]: No configuration found. 
Sep 9 00:19:28.032473 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Sep 9 00:19:28.032499 kernel: Guest personality initialized and is active Sep 9 00:19:28.032507 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:19:28.032515 kernel: Initialized host personality Sep 9 00:19:28.032525 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:19:28.032534 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:19:28.032544 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:19:28.032552 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Sep 9 00:19:28.032560 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:19:28.032567 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:19:28.032574 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:19:28.036661 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:19:28.051525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:19:28.051544 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:19:28.051553 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:19:28.051561 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:19:28.051568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:19:28.051576 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:19:28.051604 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:19:28.051612 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:19:28.051619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:19:28.051629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:19:28.051637 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:19:28.051644 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:19:28.051653 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:19:28.051660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:19:28.051669 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:19:28.051937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:19:28.051946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:19:28.051953 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:19:28.051961 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:19:28.051969 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:19:28.051978 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:19:28.051986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 9 00:19:28.051995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:19:28.052003 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:19:28.052010 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:19:28.052017 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:19:28.052025 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:19:28.052034 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:19:28.052041 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:19:28.052049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:19:28.052056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:19:28.052064 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:19:28.052071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:19:28.052079 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:19:28.052086 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:19:28.052095 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:28.052103 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:19:28.052110 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:19:28.052118 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:19:28.052126 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:19:28.052133 systemd[1]: Reached target machines.target - Containers. Sep 9 00:19:28.052141 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:19:28.052149 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Sep 9 00:19:28.052157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:19:28.052165 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:19:28.052172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:19:28.052180 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:19:28.052187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:19:28.052194 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:19:28.052202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:19:28.052209 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:19:28.052218 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:19:28.052226 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:19:28.052233 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:19:28.052241 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 00:19:28.052248 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:28.052256 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:19:28.052263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:19:28.052271 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:19:28.052279 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:19:28.052287 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:19:28.052294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:19:28.052302 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:19:28.052310 systemd[1]: Stopped verity-setup.service. Sep 9 00:19:28.052317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:28.052325 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:19:28.052333 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:19:28.052340 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:19:28.052349 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:19:28.052356 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:19:28.052364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:19:28.052371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:19:28.052379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:19:28.052386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:19:28.052393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:19:28.052401 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:19:28.052409 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:19:28.052417 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:19:28.052425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:19:28.052432 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:19:28.052440 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:19:28.052447 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:19:28.052454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:28.052462 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:19:28.052470 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:19:28.052478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:19:28.052486 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 9 00:19:28.052497 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:19:28.052506 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:19:28.052514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:19:28.052521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:19:28.052529 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:19:28.052563 systemd-journald[1220]: Collecting audit messages is disabled. Sep 9 00:19:28.055142 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:19:28.055163 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:19:28.055175 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:19:28.055183 kernel: fuse: init (API version 7.41) Sep 9 00:19:28.060266 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:19:28.060289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:19:28.060297 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:19:28.060306 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:19:28.060314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:19:28.060321 kernel: loop: module loaded Sep 9 00:19:28.060336 systemd-journald[1220]: Journal started Sep 9 00:19:28.060355 systemd-journald[1220]: Runtime Journal (/run/log/journal/6393bff654b54eeba0e78d5f2dbe9140) is 4.8M, max 38.9M, 34M free. Sep 9 00:19:27.789196 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:19:27.804774 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 9 00:19:27.805043 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:19:28.064797 jq[1207]: true Sep 9 00:19:28.065272 jq[1238]: true Sep 9 00:19:28.080089 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:19:28.080124 kernel: loop0: detected capacity change from 0 to 146240 Sep 9 00:19:28.068475 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:19:28.068660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:19:28.085663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:19:28.105099 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:19:28.105264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:19:28.106594 ignition[1256]: Ignition 2.21.0 Sep 9 00:19:28.107431 ignition[1256]: deleting config from guestinfo properties Sep 9 00:19:28.112347 ignition[1256]: Successfully deleted config Sep 9 00:19:28.122828 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:19:28.123750 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Sep 9 00:19:28.125687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:19:28.125075 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:19:28.127987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 00:19:28.128730 kernel: ACPI: bus type drm_connector registered Sep 9 00:19:28.130345 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:19:28.133082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:19:28.136909 systemd-journald[1220]: Time spent on flushing to /var/log/journal/6393bff654b54eeba0e78d5f2dbe9140 is 38.278ms for 1774 entries. Sep 9 00:19:28.136909 systemd-journald[1220]: System Journal (/var/log/journal/6393bff654b54eeba0e78d5f2dbe9140) is 8M, max 584.8M, 576.8M free. Sep 9 00:19:28.181943 systemd-journald[1220]: Received client request to flush runtime journal. Sep 9 00:19:28.181968 kernel: loop1: detected capacity change from 0 to 224512 Sep 9 00:19:28.183406 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:19:28.187723 kernel: loop2: detected capacity change from 0 to 113872 Sep 9 00:19:28.190775 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:19:28.193659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:19:28.220205 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Sep 9 00:19:28.220396 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Sep 9 00:19:28.226912 kernel: loop3: detected capacity change from 0 to 2960 Sep 9 00:19:28.226639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:19:28.254350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:19:28.265622 kernel: loop4: detected capacity change from 0 to 146240 Sep 9 00:19:28.477609 kernel: loop5: detected capacity change from 0 to 224512 Sep 9 00:19:28.517603 kernel: loop6: detected capacity change from 0 to 113872 Sep 9 00:19:28.556741 kernel: loop7: detected capacity change from 0 to 2960 Sep 9 00:19:28.589731 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Sep 9 00:19:28.590578 (sd-merge)[1310]: Merged extensions into '/usr'. Sep 9 00:19:28.596714 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:19:28.596727 systemd[1]: Reloading... Sep 9 00:19:28.671607 zram_generator::config[1338]: No configuration found. Sep 9 00:19:28.799512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:19:28.812448 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:19:28.858283 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:19:28.858413 systemd[1]: Reloading finished in 261 ms. Sep 9 00:19:28.879229 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:19:28.879537 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:19:28.882973 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:19:28.887400 systemd[1]: Starting ensure-sysext.service... Sep 9 00:19:28.890690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:19:28.892709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 9 00:19:28.898679 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:19:28.901515 systemd[1]: Reload requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:19:28.901529 systemd[1]: Reloading... Sep 9 00:19:28.927503 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:19:28.927575 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Sep 9 00:19:28.934418 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:19:28.934438 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:19:28.934605 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:19:28.934770 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:19:28.935681 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:19:28.935884 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Sep 9 00:19:28.935919 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Sep 9 00:19:28.938162 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:19:28.938168 systemd-tmpfiles[1394]: Skipping /boot Sep 9 00:19:28.945096 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:19:28.945104 systemd-tmpfiles[1394]: Skipping /boot Sep 9 00:19:28.960599 zram_generator::config[1422]: No configuration found. Sep 9 00:19:29.084904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:19:29.097475 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:19:29.133604 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:19:29.152596 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:19:29.159608 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:19:29.163296 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:19:29.163513 systemd[1]: Reloading finished in 261 ms. Sep 9 00:19:29.172466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:19:29.172814 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:19:29.177765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:19:29.192597 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:19:29.201385 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:19:29.202963 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:19:29.205631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:19:29.208114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 9 00:19:29.216847 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:19:29.220999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:29.223083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:19:29.227396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:19:29.229116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:19:29.229307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:29.229388 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:29.229468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:29.232469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:29.232623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:29.232702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:29.237218 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:19:29.239618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:29.242095 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:29.250876 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:19:29.251116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:29.251208 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:29.251332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:29.256055 systemd[1]: Finished ensure-sysext.service. Sep 9 00:19:29.263569 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:19:29.276044 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:19:29.285322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:19:29.286457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:19:29.293741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:19:29.300049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:19:29.303127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. 
Sep 9 00:19:29.303577 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:19:29.304211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:19:29.309073 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:19:29.309750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:19:29.310365 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:19:29.310517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:19:29.310883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:19:29.314232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:19:29.319478 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:19:29.333290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:19:29.333569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:19:29.337652 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:19:29.350694 augenrules[1559]: No rules Sep 9 00:19:29.351475 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:19:29.351879 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:19:29.352664 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:19:29.357484 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:19:29.428750 systemd-networkd[1518]: lo: Link UP Sep 9 00:19:29.428955 systemd-networkd[1518]: lo: Gained carrier Sep 9 00:19:29.429908 systemd-networkd[1518]: Enumeration completed Sep 9 00:19:29.429983 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:19:29.432888 systemd-networkd[1518]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 9 00:19:29.433778 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:19:29.436327 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 00:19:29.436515 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 00:19:29.436935 systemd-networkd[1518]: ens192: Link UP Sep 9 00:19:29.437075 systemd-networkd[1518]: ens192: Gained carrier Sep 9 00:19:29.437555 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:19:29.439701 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 9 00:19:29.482570 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:19:29.482789 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:19:29.484677 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:19:29.491094 systemd-resolved[1519]: Positive Trust Anchors: Sep 9 00:19:29.491304 systemd-resolved[1519]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:19:29.491330 systemd-resolved[1519]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:19:29.498435 systemd-resolved[1519]: Defaulting to hostname 'linux'. Sep 9 00:19:29.499770 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:19:29.499951 systemd[1]: Reached target network.target - Network. Sep 9 00:19:29.500150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:19:29.500389 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:19:29.500725 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:19:29.500914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:19:29.501373 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:19:29.501566 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:19:29.501783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:19:29.502629 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:19:29.502753 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:19:29.502773 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:19:29.502863 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:19:29.503898 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:19:29.505141 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:19:29.510703 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:19:29.510958 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:19:29.511086 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:19:29.514184 (udev-worker)[1456]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 9 00:19:29.516677 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:19:29.517094 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:19:29.517696 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:19:29.523464 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:19:29.523688 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:19:29.523847 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:19:29.523914 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:19:29.525916 systemd[1]: Starting containerd.service - containerd container runtime... 
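The trust-anchor dump above is systemd-resolved's compiled-in root DS record plus its default negative anchors. Additional anchors, if ever needed, are read from dnssec-trust-anchors.d files; the sketch below only restates the root DS record already shown plus an illustrative negative entry (directory and suffix conventions per dnssec-trust-anchors.d(5); the zone choices are examples, not anything this host configures):

# /etc/dnssec-trust-anchors.d/root.positive  -- DS records, zone name first
. IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

# /etc/dnssec-trust-anchors.d/local.negative -- zones exempted from DNSSEC validation
home.arpa
168.192.in-addr.arpa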
Sep 9 00:19:29.528253 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:19:29.530900 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:19:29.534809 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:19:29.539808 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:19:29.539934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:19:29.542633 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:19:29.547821 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:19:29.552440 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:19:29.556464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:21:02.359688 systemd-timesyncd[1533]: Contacted time server 50.218.103.254:123 (0.flatcar.pool.ntp.org). Sep 9 00:21:02.359726 systemd-timesyncd[1533]: Initial clock synchronization to Tue 2025-09-09 00:21:02.359625 UTC. Sep 9 00:21:02.360449 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:21:02.363339 systemd-resolved[1519]: Clock change detected. Flushing caches. Sep 9 00:21:02.366116 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:21:02.366722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:21:02.368981 jq[1596]: false Sep 9 00:21:02.369899 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:21:02.376867 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:21:02.379550 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:21:02.380627 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing passwd entry cache Sep 9 00:21:02.381410 oslogin_cache_refresh[1598]: Refreshing passwd entry cache Sep 9 00:21:02.383816 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Sep 9 00:21:02.387077 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting users, quitting Sep 9 00:21:02.387514 oslogin_cache_refresh[1598]: Failure getting users, quitting Sep 9 00:21:02.387583 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:21:02.388254 oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:21:02.388639 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing group entry cache Sep 9 00:21:02.388286 oslogin_cache_refresh[1598]: Refreshing group entry cache Sep 9 00:21:02.389420 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:21:02.389905 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:21:02.390063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:21:02.390925 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 9 00:21:02.391083 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:21:02.393094 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting groups, quitting Sep 9 00:21:02.393094 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:21:02.392521 oslogin_cache_refresh[1598]: Failure getting groups, quitting Sep 9 00:21:02.392529 oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:21:02.393686 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:21:02.394412 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:21:02.402505 jq[1608]: true Sep 9 00:21:02.414146 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Sep 9 00:21:02.417186 tar[1614]: linux-amd64/LICENSE Sep 9 00:21:02.417186 tar[1614]: linux-amd64/helm Sep 9 00:21:02.424426 extend-filesystems[1597]: Found /dev/sda6 Sep 9 00:21:02.425253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:21:02.430083 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Sep 9 00:21:02.431072 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:21:02.431228 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:21:02.435166 update_engine[1604]: I20250909 00:21:02.434870 1604 main.cc:92] Flatcar Update Engine starting Sep 9 00:21:02.440019 extend-filesystems[1597]: Found /dev/sda9 Sep 9 00:21:02.441511 extend-filesystems[1597]: Checking size of /dev/sda9 Sep 9 00:21:02.441673 jq[1623]: true Sep 9 00:21:02.450887 (ntainerd)[1637]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:21:02.463415 extend-filesystems[1597]: Old size kept for /dev/sda9 Sep 9 00:21:02.463081 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:21:02.463239 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:21:02.473468 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Sep 9 00:21:02.484534 unknown[1629]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Sep 9 00:21:02.496785 unknown[1629]: Core dump limit set to -1 Sep 9 00:21:02.499007 dbus-daemon[1594]: [system] SELinux support is enabled Sep 9 00:21:02.499118 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:21:02.501048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:21:02.501065 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:21:02.501205 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:21:02.501219 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:21:02.525808 systemd[1]: Started update-engine.service - Update Engine. 
Sep 9 00:21:02.527851 update_engine[1604]: I20250909 00:21:02.527491 1604 update_check_scheduler.cc:74] Next update check in 9m49s Sep 9 00:21:02.538427 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:21:02.566373 systemd-logind[1603]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:21:02.566563 systemd-logind[1603]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:21:02.566960 systemd-logind[1603]: New seat seat0. Sep 9 00:21:02.568092 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:21:02.569677 sshd_keygen[1634]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:21:02.602520 bash[1664]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:21:02.605074 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:21:02.606907 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:21:02.613122 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:21:02.619072 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:21:02.655361 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:21:02.655528 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:21:02.658627 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:21:02.667194 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:21:02.690445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:21:02.693016 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:21:02.694587 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:21:02.694807 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:21:02.794836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
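locksmithd reports strategy="reboot" above, and update_engine has already scheduled its next check. On Flatcar the reboot strategy normally comes from /etc/flatcar/update.conf; a sketch of that file, assuming the documented key names and values apply to this release:

# /etc/flatcar/update.conf (sketch)
GROUP=stable
# REBOOT_STRATEGY: reboot (reboot as soon as an update is applied),
# etcd-lock (take a cluster-wide lock first), or off (never auto-reboot)
REBOOT_STRATEGY=reboot

# locksmithd re-reads the file after a restart:
systemctl restart locksmithd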
Sep 9 00:21:02.795706 containerd[1637]: time="2025-09-09T00:21:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 00:21:02.797403 containerd[1637]: time="2025-09-09T00:21:02.797167234Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 9 00:21:02.814161 containerd[1637]: time="2025-09-09T00:21:02.814125593Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.401µs" Sep 9 00:21:02.814161 containerd[1637]: time="2025-09-09T00:21:02.814153575Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 00:21:02.814161 containerd[1637]: time="2025-09-09T00:21:02.814167279Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 00:21:02.814302 containerd[1637]: time="2025-09-09T00:21:02.814289720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 00:21:02.814321 containerd[1637]: time="2025-09-09T00:21:02.814305615Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 00:21:02.814334 containerd[1637]: time="2025-09-09T00:21:02.814328115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814392 containerd[1637]: time="2025-09-09T00:21:02.814373332Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814411 containerd[1637]: time="2025-09-09T00:21:02.814391828Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814583 containerd[1637]: time="2025-09-09T00:21:02.814566968Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814604 containerd[1637]: time="2025-09-09T00:21:02.814581597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814604 containerd[1637]: time="2025-09-09T00:21:02.814592464Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814604 containerd[1637]: time="2025-09-09T00:21:02.814598378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814680 containerd[1637]: time="2025-09-09T00:21:02.814658839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814822 containerd[1637]: time="2025-09-09T00:21:02.814808536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814844 containerd[1637]: time="2025-09-09T00:21:02.814830651Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 9 00:21:02.814844 containerd[1637]: time="2025-09-09T00:21:02.814839832Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 00:21:02.814882 containerd[1637]: time="2025-09-09T00:21:02.814854767Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 00:21:02.815010 containerd[1637]: time="2025-09-09T00:21:02.814998362Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 00:21:02.815055 containerd[1637]: time="2025-09-09T00:21:02.815040759Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:21:02.818858 containerd[1637]: time="2025-09-09T00:21:02.818834639Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 00:21:02.818901 containerd[1637]: time="2025-09-09T00:21:02.818873377Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:21:02.818901 containerd[1637]: time="2025-09-09T00:21:02.818884801Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:21:02.818901 containerd[1637]: time="2025-09-09T00:21:02.818892150Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818900087Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818908210Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818915580Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818921989Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818927687Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818935273Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:21:02.818950 containerd[1637]: time="2025-09-09T00:21:02.818942424Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:21:02.819038 containerd[1637]: time="2025-09-09T00:21:02.818953732Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 00:21:02.819038 containerd[1637]: time="2025-09-09T00:21:02.819025873Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:21:02.819064 containerd[1637]: time="2025-09-09T00:21:02.819043815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:21:02.819064 containerd[1637]: time="2025-09-09T00:21:02.819055858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 00:21:02.819090 containerd[1637]: time="2025-09-09T00:21:02.819065926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Sep 9 00:21:02.819090 containerd[1637]: time="2025-09-09T00:21:02.819074867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:21:02.819090 containerd[1637]: time="2025-09-09T00:21:02.819081461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:21:02.819090 containerd[1637]: time="2025-09-09T00:21:02.819087563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:21:02.819146 containerd[1637]: time="2025-09-09T00:21:02.819101692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:21:02.819146 containerd[1637]: time="2025-09-09T00:21:02.819111916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:21:02.819146 containerd[1637]: time="2025-09-09T00:21:02.819120963Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:21:02.819146 containerd[1637]: time="2025-09-09T00:21:02.819129426Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:21:02.819200 containerd[1637]: time="2025-09-09T00:21:02.819177209Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:21:02.819200 containerd[1637]: time="2025-09-09T00:21:02.819190465Z" level=info msg="Start snapshots syncer" Sep 9 00:21:02.819238 containerd[1637]: time="2025-09-09T00:21:02.819205513Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:21:02.819413 containerd[1637]: time="2025-09-09T00:21:02.819371005Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 
00:21:02.819487 containerd[1637]: time="2025-09-09T00:21:02.819419920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 00:21:02.819656 containerd[1637]: time="2025-09-09T00:21:02.819517649Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:21:02.819804 containerd[1637]: time="2025-09-09T00:21:02.819789529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:21:02.819829 containerd[1637]: time="2025-09-09T00:21:02.819818424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:21:02.819855 containerd[1637]: time="2025-09-09T00:21:02.819838925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819869328Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819886785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819899231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819907668Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819928269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819937607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819949140Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819972777Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819986237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.819996485Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.820008029Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.820020076Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:21:02.820038 containerd[1637]: time="2025-09-09T00:21:02.820030620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:21:02.820226 containerd[1637]: time="2025-09-09T00:21:02.820041496Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:21:02.820226 
containerd[1637]: time="2025-09-09T00:21:02.820060758Z" level=info msg="runtime interface created" Sep 9 00:21:02.820226 containerd[1637]: time="2025-09-09T00:21:02.820067453Z" level=info msg="created NRI interface" Sep 9 00:21:02.820226 containerd[1637]: time="2025-09-09T00:21:02.820080646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:21:02.820226 containerd[1637]: time="2025-09-09T00:21:02.820088425Z" level=info msg="Connect containerd service" Sep 9 00:21:02.820226 containerd[1637]: time="2025-09-09T00:21:02.820108969Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:21:02.820870 containerd[1637]: time="2025-09-09T00:21:02.820835696Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009412782Z" level=info msg="Start subscribing containerd event" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009458162Z" level=info msg="Start recovering state" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009489044Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009518899Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009527066Z" level=info msg="Start event monitor" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009536400Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009541978Z" level=info msg="Start streaming server" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009549062Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009553253Z" level=info msg="runtime interface starting up..." Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009556351Z" level=info msg="starting plugins..." Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009564997Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:21:03.010835 containerd[1637]: time="2025-09-09T00:21:03.009641505Z" level=info msg="containerd successfully booted in 0.214281s" Sep 9 00:21:03.010488 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:21:03.021352 tar[1614]: linux-amd64/README.md Sep 9 00:21:03.034019 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:21:04.235537 systemd-networkd[1518]: ens192: Gained IPv6LL Sep 9 00:21:04.236850 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:21:04.237287 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:21:04.238433 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Sep 9 00:21:04.239683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:04.245098 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:21:04.264284 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:21:04.278105 systemd[1]: coreos-metadata.service: Deactivated successfully. 
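The "no network config found in /etc/cni/net.d" error above is expected on a node where no CNI add-on has been installed yet; the CRI plugin retries once a config appears in the confDir shown in its dump (/etc/cni/net.d). Purely to illustrate the file format, a minimal bridge conflist might look like the following (the name, bridge device and subnet are invented for the example; a real cluster's network add-on writes its own file):

# /etc/cni/net.d/10-example.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}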
Sep 9 00:21:04.278323 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 9 00:21:04.278881 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:21:05.632472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:05.632835 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:21:05.633156 systemd[1]: Startup finished in 2.732s (kernel) + 6.820s (initrd) + 5.473s (userspace) = 15.026s. Sep 9 00:21:05.646610 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:21:05.692504 login[1712]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 00:21:05.693775 login[1713]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 00:21:05.699452 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:21:05.700226 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:21:05.709605 systemd-logind[1603]: New session 1 of user core. Sep 9 00:21:05.715314 systemd-logind[1603]: New session 2 of user core. Sep 9 00:21:05.721014 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:21:05.723034 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:21:05.737507 (systemd)[1805]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:21:05.739031 systemd-logind[1603]: New session c1 of user core. Sep 9 00:21:05.847159 systemd[1805]: Queued start job for default target default.target. Sep 9 00:21:05.859449 systemd[1805]: Created slice app.slice - User Application Slice. Sep 9 00:21:05.859473 systemd[1805]: Reached target paths.target - Paths. Sep 9 00:21:05.859502 systemd[1805]: Reached target timers.target - Timers. Sep 9 00:21:05.860886 systemd[1805]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:21:05.869462 systemd[1805]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:21:05.869505 systemd[1805]: Reached target sockets.target - Sockets. Sep 9 00:21:05.869600 systemd[1805]: Reached target basic.target - Basic System. Sep 9 00:21:05.869653 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:21:05.870686 systemd[1805]: Reached target default.target - Main User Target. Sep 9 00:21:05.870708 systemd[1805]: Startup finished in 127ms. Sep 9 00:21:05.876538 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:21:05.877321 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:21:06.641757 kubelet[1798]: E0909 00:21:06.641721 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:21:06.643537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:21:06.643650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:21:06.643907 systemd[1]: kubelet.service: Consumed 668ms CPU time, 263.1M memory peak. Sep 9 00:21:16.670165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
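The kubelet crash loop above is the normal state on a node where kubeadm has not run yet: the kubeadm drop-in typically starts the kubelet with --config=/var/lib/kubelet/config.yaml, and it is kubeadm init/join that writes that file. Purely as a sketch of the kind of content that eventually lands there (values are illustrative, not what kubeadm would generate for this host):

# /var/lib/kubelet/config.yaml (illustrative sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10

Until that file exists, each restart attempt fails the same way, which is exactly what the later restart-counter entries show.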
Sep 9 00:21:16.671491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:17.066002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:17.068342 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:21:17.111307 kubelet[1851]: E0909 00:21:17.111269 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:21:17.114134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:21:17.114295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:21:17.114673 systemd[1]: kubelet.service: Consumed 118ms CPU time, 108.9M memory peak. Sep 9 00:21:27.170323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:21:27.172466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:27.539089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:27.547654 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:21:27.639780 kubelet[1866]: E0909 00:21:27.639744 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:21:27.641466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:21:27.641558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:21:27.642002 systemd[1]: kubelet.service: Consumed 121ms CPU time, 109.4M memory peak. Sep 9 00:21:32.625236 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:21:32.627586 systemd[1]: Started sshd@0-139.178.70.101:22-139.178.68.195:49156.service - OpenSSH per-connection server daemon (139.178.68.195:49156). Sep 9 00:21:32.680417 sshd[1874]: Accepted publickey for core from 139.178.68.195 port 49156 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:32.681159 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:32.684193 systemd-logind[1603]: New session 3 of user core. Sep 9 00:21:32.689480 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:21:32.741186 systemd[1]: Started sshd@1-139.178.70.101:22-139.178.68.195:49168.service - OpenSSH per-connection server daemon (139.178.68.195:49168). Sep 9 00:21:32.782122 sshd[1879]: Accepted publickey for core from 139.178.68.195 port 49168 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:32.782815 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:32.785808 systemd-logind[1603]: New session 4 of user core. Sep 9 00:21:32.789461 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 9 00:21:32.838647 sshd[1881]: Connection closed by 139.178.68.195 port 49168 Sep 9 00:21:32.839593 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:32.848739 systemd[1]: sshd@1-139.178.70.101:22-139.178.68.195:49168.service: Deactivated successfully. Sep 9 00:21:32.849722 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:21:32.850230 systemd-logind[1603]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:21:32.851633 systemd[1]: Started sshd@2-139.178.70.101:22-139.178.68.195:49172.service - OpenSSH per-connection server daemon (139.178.68.195:49172). Sep 9 00:21:32.852372 systemd-logind[1603]: Removed session 4. Sep 9 00:21:32.896697 sshd[1887]: Accepted publickey for core from 139.178.68.195 port 49172 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:32.897332 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:32.899857 systemd-logind[1603]: New session 5 of user core. Sep 9 00:21:32.907462 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:21:32.954415 sshd[1889]: Connection closed by 139.178.68.195 port 49172 Sep 9 00:21:32.954740 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:32.966566 systemd[1]: sshd@2-139.178.70.101:22-139.178.68.195:49172.service: Deactivated successfully. Sep 9 00:21:32.967460 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:21:32.968215 systemd-logind[1603]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:21:32.969342 systemd[1]: Started sshd@3-139.178.70.101:22-139.178.68.195:49178.service - OpenSSH per-connection server daemon (139.178.68.195:49178). Sep 9 00:21:32.970469 systemd-logind[1603]: Removed session 5. Sep 9 00:21:33.015779 sshd[1895]: Accepted publickey for core from 139.178.68.195 port 49178 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:33.016530 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:33.019197 systemd-logind[1603]: New session 6 of user core. Sep 9 00:21:33.026584 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:21:33.074623 sshd[1897]: Connection closed by 139.178.68.195 port 49178 Sep 9 00:21:33.074992 sshd-session[1895]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:33.083541 systemd[1]: sshd@3-139.178.70.101:22-139.178.68.195:49178.service: Deactivated successfully. Sep 9 00:21:33.084419 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:21:33.084848 systemd-logind[1603]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:21:33.086192 systemd[1]: Started sshd@4-139.178.70.101:22-139.178.68.195:49180.service - OpenSSH per-connection server daemon (139.178.68.195:49180). Sep 9 00:21:33.087864 systemd-logind[1603]: Removed session 6. Sep 9 00:21:33.124976 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 49180 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:33.125749 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:33.128369 systemd-logind[1603]: New session 7 of user core. Sep 9 00:21:33.138525 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 00:21:33.194344 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:21:33.194528 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:21:33.206877 sudo[1906]: pam_unix(sudo:session): session closed for user root Sep 9 00:21:33.209659 sshd[1905]: Connection closed by 139.178.68.195 port 49180 Sep 9 00:21:33.210010 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:33.219452 systemd[1]: sshd@4-139.178.70.101:22-139.178.68.195:49180.service: Deactivated successfully. Sep 9 00:21:33.220420 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:21:33.220924 systemd-logind[1603]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:21:33.222888 systemd[1]: Started sshd@5-139.178.70.101:22-139.178.68.195:49184.service - OpenSSH per-connection server daemon (139.178.68.195:49184). Sep 9 00:21:33.223482 systemd-logind[1603]: Removed session 7. Sep 9 00:21:33.267642 sshd[1912]: Accepted publickey for core from 139.178.68.195 port 49184 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:33.268396 sshd-session[1912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:33.270906 systemd-logind[1603]: New session 8 of user core. Sep 9 00:21:33.278505 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:21:33.327621 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:21:33.328058 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:21:33.331052 sudo[1916]: pam_unix(sudo:session): session closed for user root Sep 9 00:21:33.335008 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:21:33.335207 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:21:33.342708 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:21:33.368924 augenrules[1938]: No rules Sep 9 00:21:33.369564 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:21:33.369766 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:21:33.370508 sudo[1915]: pam_unix(sudo:session): session closed for user root Sep 9 00:21:33.371316 sshd[1914]: Connection closed by 139.178.68.195 port 49184 Sep 9 00:21:33.372178 sshd-session[1912]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:33.378030 systemd[1]: sshd@5-139.178.70.101:22-139.178.68.195:49184.service: Deactivated successfully. Sep 9 00:21:33.378993 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:21:33.379612 systemd-logind[1603]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:21:33.381221 systemd[1]: Started sshd@6-139.178.70.101:22-139.178.68.195:49188.service - OpenSSH per-connection server daemon (139.178.68.195:49188). Sep 9 00:21:33.383173 systemd-logind[1603]: Removed session 8. Sep 9 00:21:33.418799 sshd[1947]: Accepted publickey for core from 139.178.68.195 port 49188 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:21:33.419519 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:33.422586 systemd-logind[1603]: New session 9 of user core. Sep 9 00:21:33.429475 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 9 00:21:33.478683 sudo[1950]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:21:33.478881 sudo[1950]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:21:33.779024 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:21:33.793599 (dockerd)[1969]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:21:33.998659 dockerd[1969]: time="2025-09-09T00:21:33.998620821Z" level=info msg="Starting up" Sep 9 00:21:33.999367 dockerd[1969]: time="2025-09-09T00:21:33.999351398Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:21:34.026664 dockerd[1969]: time="2025-09-09T00:21:34.026600208Z" level=info msg="Loading containers: start." Sep 9 00:21:34.043407 kernel: Initializing XFRM netlink socket Sep 9 00:21:34.182410 systemd-networkd[1518]: docker0: Link UP Sep 9 00:21:34.185619 dockerd[1969]: time="2025-09-09T00:21:34.185580924Z" level=info msg="Loading containers: done." Sep 9 00:21:34.193760 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck465244933-merged.mount: Deactivated successfully. Sep 9 00:21:34.195771 dockerd[1969]: time="2025-09-09T00:21:34.195750308Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:21:34.195864 dockerd[1969]: time="2025-09-09T00:21:34.195854059Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 9 00:21:34.195951 dockerd[1969]: time="2025-09-09T00:21:34.195942524Z" level=info msg="Initializing buildkit" Sep 9 00:21:34.205549 dockerd[1969]: time="2025-09-09T00:21:34.205420802Z" level=info msg="Completed buildkit initialization" Sep 9 00:21:34.209690 dockerd[1969]: time="2025-09-09T00:21:34.209674454Z" level=info msg="Daemon has completed initialization" Sep 9 00:21:34.209839 dockerd[1969]: time="2025-09-09T00:21:34.209821465Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:21:34.210003 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:21:35.640662 containerd[1637]: time="2025-09-09T00:21:35.640633259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 00:21:36.329821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033046441.mount: Deactivated successfully. 
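The PullImage entries that follow are requests served by containerd's CRI plugin. The same images can be fetched or inspected by hand with containerd's ctr client, using the k8s.io namespace the CRI plugin registered earlier (a sketch; only the namespace flag is non-default):

# pull into the CRI namespace, then confirm it is visible there
ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.8
ctr -n k8s.io images ls | grep kube-apiserver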
Sep 9 00:21:37.223206 containerd[1637]: time="2025-09-09T00:21:37.222747311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:37.223667 containerd[1637]: time="2025-09-09T00:21:37.223656158Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 00:21:37.223954 containerd[1637]: time="2025-09-09T00:21:37.223943757Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:37.225656 containerd[1637]: time="2025-09-09T00:21:37.225643881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:37.226541 containerd[1637]: time="2025-09-09T00:21:37.226529081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.585867424s" Sep 9 00:21:37.226595 containerd[1637]: time="2025-09-09T00:21:37.226586822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 00:21:37.226949 containerd[1637]: time="2025-09-09T00:21:37.226935112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 00:21:37.670229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:21:37.671925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:37.995244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:38.003693 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:21:38.032026 kubelet[2232]: E0909 00:21:38.031996 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:21:38.033491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:21:38.033577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:21:38.033878 systemd[1]: kubelet.service: Consumed 93ms CPU time, 110.3M memory peak. 
Sep 9 00:21:38.790846 containerd[1637]: time="2025-09-09T00:21:38.790812571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:38.795822 containerd[1637]: time="2025-09-09T00:21:38.795806163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 00:21:38.801107 containerd[1637]: time="2025-09-09T00:21:38.801081592Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:38.810263 containerd[1637]: time="2025-09-09T00:21:38.810228008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:38.810643 containerd[1637]: time="2025-09-09T00:21:38.810544200Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.583532426s" Sep 9 00:21:38.810643 containerd[1637]: time="2025-09-09T00:21:38.810562726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 00:21:38.810939 containerd[1637]: time="2025-09-09T00:21:38.810928183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 00:21:39.909489 containerd[1637]: time="2025-09-09T00:21:39.908962827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:39.909489 containerd[1637]: time="2025-09-09T00:21:39.909364380Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 00:21:39.909489 containerd[1637]: time="2025-09-09T00:21:39.909465235Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:39.910738 containerd[1637]: time="2025-09-09T00:21:39.910727339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:39.911666 containerd[1637]: time="2025-09-09T00:21:39.911654679Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.100712231s" Sep 9 00:21:39.911720 containerd[1637]: time="2025-09-09T00:21:39.911711792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 00:21:39.912019 containerd[1637]: 
time="2025-09-09T00:21:39.911988439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 00:21:40.784232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766840735.mount: Deactivated successfully. Sep 9 00:21:41.161063 containerd[1637]: time="2025-09-09T00:21:41.160656110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:41.166400 containerd[1637]: time="2025-09-09T00:21:41.166363402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 00:21:41.172265 containerd[1637]: time="2025-09-09T00:21:41.172246064Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:41.180725 containerd[1637]: time="2025-09-09T00:21:41.180693129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:41.180946 containerd[1637]: time="2025-09-09T00:21:41.180871485Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.268789139s" Sep 9 00:21:41.180946 containerd[1637]: time="2025-09-09T00:21:41.180889645Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 00:21:41.181173 containerd[1637]: time="2025-09-09T00:21:41.181156367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:21:41.621223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888965936.mount: Deactivated successfully. 
Sep 9 00:21:42.640068 containerd[1637]: time="2025-09-09T00:21:42.639541885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:42.649615 containerd[1637]: time="2025-09-09T00:21:42.649590481Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:21:42.654874 containerd[1637]: time="2025-09-09T00:21:42.654851265Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:42.659982 containerd[1637]: time="2025-09-09T00:21:42.659957985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:42.660531 containerd[1637]: time="2025-09-09T00:21:42.660516964Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.479345075s" Sep 9 00:21:42.660583 containerd[1637]: time="2025-09-09T00:21:42.660575192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:21:42.660881 containerd[1637]: time="2025-09-09T00:21:42.660859410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:21:43.511981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687916415.mount: Deactivated successfully. 
Sep 9 00:21:43.586661 containerd[1637]: time="2025-09-09T00:21:43.586623458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:21:43.589090 containerd[1637]: time="2025-09-09T00:21:43.589062725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:21:43.597251 containerd[1637]: time="2025-09-09T00:21:43.597177822Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:21:43.602528 containerd[1637]: time="2025-09-09T00:21:43.602474162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:21:43.603167 containerd[1637]: time="2025-09-09T00:21:43.602840691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 941.921532ms" Sep 9 00:21:43.603167 containerd[1637]: time="2025-09-09T00:21:43.602862510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:21:43.603167 containerd[1637]: time="2025-09-09T00:21:43.603160422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 00:21:44.359072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387129817.mount: Deactivated successfully. Sep 9 00:21:47.606472 update_engine[1604]: I20250909 00:21:47.606436 1604 update_attempter.cc:509] Updating boot flags... Sep 9 00:21:48.170417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 00:21:48.172858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:48.630892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:48.636737 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:21:48.913072 kubelet[2392]: E0909 00:21:48.913003 2392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:21:48.914390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:21:48.914516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:21:48.914887 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107.7M memory peak. 
Sep 9 00:21:50.734248 containerd[1637]: time="2025-09-09T00:21:50.733740291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:50.734248 containerd[1637]: time="2025-09-09T00:21:50.734217460Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 00:21:50.734576 containerd[1637]: time="2025-09-09T00:21:50.734458275Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:50.736564 containerd[1637]: time="2025-09-09T00:21:50.736547511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:50.737126 containerd[1637]: time="2025-09-09T00:21:50.737104644Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 7.133895746s" Sep 9 00:21:50.737165 containerd[1637]: time="2025-09-09T00:21:50.737133583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 00:21:52.756625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:52.756757 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107.7M memory peak. Sep 9 00:21:52.758452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:52.778509 systemd[1]: Reload requested from client PID 2432 ('systemctl') (unit session-9.scope)... Sep 9 00:21:52.778602 systemd[1]: Reloading... Sep 9 00:21:52.895421 zram_generator::config[2482]: No configuration found. Sep 9 00:21:52.946073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:21:52.956364 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:21:53.024617 systemd[1]: Reloading finished in 245 ms. Sep 9 00:21:53.364098 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:21:53.364179 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:21:53.364381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:53.364438 systemd[1]: kubelet.service: Consumed 49ms CPU time, 74.3M memory peak. Sep 9 00:21:53.366426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:21:54.767481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:21:54.777678 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:21:54.859554 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:21:54.860316 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:21:54.860316 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:21:54.860316 kubelet[2543]: I0909 00:21:54.859846 2543 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:21:55.297410 kubelet[2543]: I0909 00:21:55.297209 2543 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:21:55.297410 kubelet[2543]: I0909 00:21:55.297236 2543 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:21:55.297523 kubelet[2543]: I0909 00:21:55.297427 2543 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:21:55.746975 kubelet[2543]: E0909 00:21:55.746943 2543 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:55.785055 kubelet[2543]: I0909 00:21:55.784943 2543 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:21:55.902018 kubelet[2543]: I0909 00:21:55.901991 2543 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:21:55.924206 kubelet[2543]: I0909 00:21:55.924177 2543 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:21:55.929074 kubelet[2543]: I0909 00:21:55.929032 2543 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:21:55.929231 kubelet[2543]: I0909 00:21:55.929069 2543 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:21:55.939405 kubelet[2543]: I0909 00:21:55.939357 2543 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:21:55.939405 kubelet[2543]: I0909 00:21:55.939398 2543 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:21:55.953900 kubelet[2543]: I0909 00:21:55.953877 2543 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:21:55.995037 kubelet[2543]: I0909 00:21:55.995005 2543 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:21:55.995037 kubelet[2543]: I0909 00:21:55.995041 2543 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:21:56.005956 kubelet[2543]: I0909 00:21:56.005852 2543 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:21:56.005956 kubelet[2543]: I0909 00:21:56.005890 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:21:56.022417 kubelet[2543]: W0909 00:21:56.022068 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:56.022417 kubelet[2543]: E0909 00:21:56.022107 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:56.022561 kubelet[2543]: W0909 00:21:56.022530 2543 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:56.022598 kubelet[2543]: E0909 00:21:56.022580 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:56.027801 kubelet[2543]: I0909 00:21:56.027771 2543 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:21:56.047610 kubelet[2543]: I0909 00:21:56.047454 2543 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:21:56.057976 kubelet[2543]: W0909 00:21:56.057589 2543 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:21:56.058345 kubelet[2543]: I0909 00:21:56.058327 2543 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:21:56.058379 kubelet[2543]: I0909 00:21:56.058357 2543 server.go:1287] "Started kubelet" Sep 9 00:21:56.058420 kubelet[2543]: I0909 00:21:56.058405 2543 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:21:56.059007 kubelet[2543]: I0909 00:21:56.058993 2543 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:21:56.070453 kubelet[2543]: I0909 00:21:56.070428 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:21:56.089785 kubelet[2543]: I0909 00:21:56.089704 2543 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:21:56.092171 kubelet[2543]: I0909 00:21:56.092134 2543 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:21:56.092472 kubelet[2543]: E0909 00:21:56.092429 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:56.093030 kubelet[2543]: I0909 00:21:56.092965 2543 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:21:56.093030 kubelet[2543]: I0909 00:21:56.093001 2543 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:21:56.094332 kubelet[2543]: W0909 00:21:56.094148 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:56.094332 kubelet[2543]: E0909 00:21:56.094187 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:56.094332 kubelet[2543]: E0909 00:21:56.094244 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
139.178.70.101:6443: connect: connection refused" interval="200ms" Sep 9 00:21:56.094718 kubelet[2543]: I0909 00:21:56.094686 2543 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:21:56.094780 kubelet[2543]: I0909 00:21:56.094759 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:21:56.131419 kubelet[2543]: I0909 00:21:56.131229 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:21:56.131508 kubelet[2543]: I0909 00:21:56.131458 2543 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:21:56.158655 kubelet[2543]: I0909 00:21:56.158621 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:21:56.159347 kubelet[2543]: I0909 00:21:56.159327 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:21:56.159347 kubelet[2543]: I0909 00:21:56.159346 2543 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:21:56.159460 kubelet[2543]: I0909 00:21:56.159362 2543 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:21:56.159460 kubelet[2543]: I0909 00:21:56.159366 2543 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:21:56.159460 kubelet[2543]: E0909 00:21:56.159402 2543 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:21:56.164126 kubelet[2543]: W0909 00:21:56.164095 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:56.164167 kubelet[2543]: E0909 00:21:56.164128 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:56.165107 kubelet[2543]: I0909 00:21:56.165092 2543 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:21:56.182012 kubelet[2543]: E0909 00:21:56.158169 2543 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863755ba3fc9260 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:21:56.058337888 +0000 UTC m=+1.277713581,LastTimestamp:2025-09-09 00:21:56.058337888 +0000 UTC m=+1.277713581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:21:56.225815 kubelet[2543]: E0909 00:21:56.225695 2543 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:56.226086 kubelet[2543]: I0909 00:21:56.226076 2543 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:21:56.226136 kubelet[2543]: I0909 00:21:56.226129 2543 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:21:56.226205 kubelet[2543]: I0909 00:21:56.226197 2543 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:21:56.259526 kubelet[2543]: E0909 00:21:56.259437 2543 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:21:56.295024 kubelet[2543]: E0909 00:21:56.294990 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" Sep 9 00:21:56.326218 kubelet[2543]: E0909 00:21:56.326179 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:56.396237 kubelet[2543]: I0909 00:21:56.396212 2543 policy_none.go:49] "None policy: Start" Sep 9 00:21:56.396314 kubelet[2543]: I0909 00:21:56.396244 2543 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:21:56.396314 kubelet[2543]: I0909 00:21:56.396255 2543 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:21:56.426624 kubelet[2543]: E0909 00:21:56.426604 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:56.437095 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:21:56.446516 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:21:56.449574 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:21:56.457937 kubelet[2543]: I0909 00:21:56.457919 2543 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:21:56.458059 kubelet[2543]: I0909 00:21:56.458047 2543 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:21:56.458095 kubelet[2543]: I0909 00:21:56.458056 2543 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:21:56.458299 kubelet[2543]: I0909 00:21:56.458281 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:21:56.458879 kubelet[2543]: E0909 00:21:56.458867 2543 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:21:56.458942 kubelet[2543]: E0909 00:21:56.458899 2543 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:21:56.474992 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 9 00:21:56.495805 kubelet[2543]: E0909 00:21:56.495780 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:56.499209 systemd[1]: Created slice kubepods-burstable-pod08b50f7fae72aa1db4778b1c3826279c.slice - libcontainer container kubepods-burstable-pod08b50f7fae72aa1db4778b1c3826279c.slice. Sep 9 00:21:56.500787 kubelet[2543]: E0909 00:21:56.500735 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:56.503027 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 00:21:56.504590 kubelet[2543]: E0909 00:21:56.504573 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:56.530235 kubelet[2543]: I0909 00:21:56.530145 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08b50f7fae72aa1db4778b1c3826279c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"08b50f7fae72aa1db4778b1c3826279c\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:56.530235 kubelet[2543]: I0909 00:21:56.530176 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08b50f7fae72aa1db4778b1c3826279c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"08b50f7fae72aa1db4778b1c3826279c\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:56.530235 kubelet[2543]: I0909 00:21:56.530193 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:56.530235 kubelet[2543]: I0909 00:21:56.530205 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:56.530358 kubelet[2543]: I0909 00:21:56.530344 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:21:56.530358 kubelet[2543]: I0909 00:21:56.530355 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08b50f7fae72aa1db4778b1c3826279c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"08b50f7fae72aa1db4778b1c3826279c\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:56.530417 kubelet[2543]: I0909 00:21:56.530365 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:56.530417 kubelet[2543]: I0909 00:21:56.530373 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:56.530417 kubelet[2543]: I0909 00:21:56.530394 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:56.559184 kubelet[2543]: I0909 00:21:56.559154 2543 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:21:56.559515 kubelet[2543]: E0909 00:21:56.559499 2543 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:21:56.695981 kubelet[2543]: E0909 00:21:56.695953 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="800ms" Sep 9 00:21:56.761283 kubelet[2543]: I0909 00:21:56.761259 2543 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:21:56.761532 kubelet[2543]: E0909 00:21:56.761516 2543 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:21:56.798576 containerd[1637]: time="2025-09-09T00:21:56.798502531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:56.802143 containerd[1637]: time="2025-09-09T00:21:56.802116228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:08b50f7fae72aa1db4778b1c3826279c,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:56.806181 containerd[1637]: time="2025-09-09T00:21:56.806109938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:56.965318 containerd[1637]: time="2025-09-09T00:21:56.965123895Z" level=info msg="connecting to shim 1331b9d73109061d56fc010e52a656e1b83a584b7d1d40a578bd4d681d42a0e8" address="unix:///run/containerd/s/50f7166768f825350a9aed1d0b5c9a2c673316c1f284d4d3029c489463d803bc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:21:56.965875 containerd[1637]: time="2025-09-09T00:21:56.965833503Z" level=info msg="connecting to shim 2737aa76fbd68b4a02ab33af2fbae4d6a3a9604af132dcb336953a1525100c21" address="unix:///run/containerd/s/e098d3e43bb0a4440f2e837decec9d75d8ebf3deb4556fe65bcd601a50dcedeb" namespace=k8s.io protocol=ttrpc 
version=3 Sep 9 00:21:56.967434 containerd[1637]: time="2025-09-09T00:21:56.967368244Z" level=info msg="connecting to shim d3370e1d6c661dae24e8d8447f59ba3a42ad779427860e21efedafc95284ab8e" address="unix:///run/containerd/s/52950268087af6b8f6284cdefecaa755dddf07af9bcd468943e8c03233bee44c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:21:57.037567 systemd[1]: Started cri-containerd-1331b9d73109061d56fc010e52a656e1b83a584b7d1d40a578bd4d681d42a0e8.scope - libcontainer container 1331b9d73109061d56fc010e52a656e1b83a584b7d1d40a578bd4d681d42a0e8. Sep 9 00:21:57.039308 systemd[1]: Started cri-containerd-2737aa76fbd68b4a02ab33af2fbae4d6a3a9604af132dcb336953a1525100c21.scope - libcontainer container 2737aa76fbd68b4a02ab33af2fbae4d6a3a9604af132dcb336953a1525100c21. Sep 9 00:21:57.040880 systemd[1]: Started cri-containerd-d3370e1d6c661dae24e8d8447f59ba3a42ad779427860e21efedafc95284ab8e.scope - libcontainer container d3370e1d6c661dae24e8d8447f59ba3a42ad779427860e21efedafc95284ab8e. Sep 9 00:21:57.068292 kubelet[2543]: W0909 00:21:57.068213 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:57.068292 kubelet[2543]: E0909 00:21:57.068256 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:57.096120 containerd[1637]: time="2025-09-09T00:21:57.095997533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3370e1d6c661dae24e8d8447f59ba3a42ad779427860e21efedafc95284ab8e\"" Sep 9 00:21:57.100363 containerd[1637]: time="2025-09-09T00:21:57.100342684Z" level=info msg="CreateContainer within sandbox \"d3370e1d6c661dae24e8d8447f59ba3a42ad779427860e21efedafc95284ab8e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:21:57.100719 containerd[1637]: time="2025-09-09T00:21:57.100703356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:08b50f7fae72aa1db4778b1c3826279c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1331b9d73109061d56fc010e52a656e1b83a584b7d1d40a578bd4d681d42a0e8\"" Sep 9 00:21:57.101974 containerd[1637]: time="2025-09-09T00:21:57.101957980Z" level=info msg="CreateContainer within sandbox \"1331b9d73109061d56fc010e52a656e1b83a584b7d1d40a578bd4d681d42a0e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:21:57.111114 containerd[1637]: time="2025-09-09T00:21:57.111088047Z" level=info msg="Container abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:57.111789 containerd[1637]: time="2025-09-09T00:21:57.111740453Z" level=info msg="Container 47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:57.114907 containerd[1637]: time="2025-09-09T00:21:57.114841759Z" level=info msg="CreateContainer within sandbox \"1331b9d73109061d56fc010e52a656e1b83a584b7d1d40a578bd4d681d42a0e8\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29\"" Sep 9 00:21:57.115951 containerd[1637]: time="2025-09-09T00:21:57.115935824Z" level=info msg="StartContainer for \"abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29\"" Sep 9 00:21:57.116319 containerd[1637]: time="2025-09-09T00:21:57.116201734Z" level=info msg="CreateContainer within sandbox \"d3370e1d6c661dae24e8d8447f59ba3a42ad779427860e21efedafc95284ab8e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4\"" Sep 9 00:21:57.117274 containerd[1637]: time="2025-09-09T00:21:57.117168325Z" level=info msg="StartContainer for \"47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4\"" Sep 9 00:21:57.117913 containerd[1637]: time="2025-09-09T00:21:57.117898748Z" level=info msg="connecting to shim abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29" address="unix:///run/containerd/s/50f7166768f825350a9aed1d0b5c9a2c673316c1f284d4d3029c489463d803bc" protocol=ttrpc version=3 Sep 9 00:21:57.118819 containerd[1637]: time="2025-09-09T00:21:57.118796667Z" level=info msg="connecting to shim 47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4" address="unix:///run/containerd/s/52950268087af6b8f6284cdefecaa755dddf07af9bcd468943e8c03233bee44c" protocol=ttrpc version=3 Sep 9 00:21:57.135538 systemd[1]: Started cri-containerd-abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29.scope - libcontainer container abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29. Sep 9 00:21:57.141176 containerd[1637]: time="2025-09-09T00:21:57.141144891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2737aa76fbd68b4a02ab33af2fbae4d6a3a9604af132dcb336953a1525100c21\"" Sep 9 00:21:57.143699 containerd[1637]: time="2025-09-09T00:21:57.143674866Z" level=info msg="CreateContainer within sandbox \"2737aa76fbd68b4a02ab33af2fbae4d6a3a9604af132dcb336953a1525100c21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:21:57.148559 systemd[1]: Started cri-containerd-47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4.scope - libcontainer container 47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4. 
Sep 9 00:21:57.156984 containerd[1637]: time="2025-09-09T00:21:57.156946643Z" level=info msg="Container b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:57.163398 containerd[1637]: time="2025-09-09T00:21:57.163350195Z" level=info msg="CreateContainer within sandbox \"2737aa76fbd68b4a02ab33af2fbae4d6a3a9604af132dcb336953a1525100c21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2\"" Sep 9 00:21:57.163906 kubelet[2543]: I0909 00:21:57.163585 2543 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:21:57.164481 containerd[1637]: time="2025-09-09T00:21:57.164369325Z" level=info msg="StartContainer for \"b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2\"" Sep 9 00:21:57.166003 containerd[1637]: time="2025-09-09T00:21:57.165310240Z" level=info msg="connecting to shim b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2" address="unix:///run/containerd/s/e098d3e43bb0a4440f2e837decec9d75d8ebf3deb4556fe65bcd601a50dcedeb" protocol=ttrpc version=3 Sep 9 00:21:57.166081 kubelet[2543]: E0909 00:21:57.165978 2543 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 9 00:21:57.187623 systemd[1]: Started cri-containerd-b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2.scope - libcontainer container b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2. Sep 9 00:21:57.207869 containerd[1637]: time="2025-09-09T00:21:57.207844671Z" level=info msg="StartContainer for \"abf5da4b98cc5cd320978c414526e771d354df058dce024c4d5d8c5c4b0efb29\" returns successfully" Sep 9 00:21:57.223004 containerd[1637]: time="2025-09-09T00:21:57.222982157Z" level=info msg="StartContainer for \"47779a5ac464b4de61fe427e4fef3d998f0c65519b7bb4f19bf4c55216d79eb4\" returns successfully" Sep 9 00:21:57.230437 kubelet[2543]: E0909 00:21:57.230315 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:57.233174 kubelet[2543]: E0909 00:21:57.233072 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:57.236089 kubelet[2543]: W0909 00:21:57.236056 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:57.236247 kubelet[2543]: E0909 00:21:57.236166 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:57.249575 containerd[1637]: time="2025-09-09T00:21:57.249553912Z" level=info msg="StartContainer for \"b6703a3bc1b77818832d0e064396968fc3bd2656396c1c11086392f87d52d9a2\" returns successfully" Sep 9 00:21:57.269953 kubelet[2543]: W0909 00:21:57.269892 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:57.269953 kubelet[2543]: E0909 00:21:57.269937 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:57.407071 kubelet[2543]: W0909 00:21:57.406829 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 9 00:21:57.407220 kubelet[2543]: E0909 00:21:57.407118 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:21:57.496353 kubelet[2543]: E0909 00:21:57.496325 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="1.6s" Sep 9 00:21:57.967398 kubelet[2543]: I0909 00:21:57.967217 2543 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:21:58.239216 kubelet[2543]: E0909 00:21:58.239061 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:58.239604 kubelet[2543]: E0909 00:21:58.239501 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:58.769427 kubelet[2543]: I0909 00:21:58.769245 2543 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:21:58.769427 kubelet[2543]: E0909 00:21:58.769280 2543 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:21:58.776863 kubelet[2543]: E0909 00:21:58.776838 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:58.863404 kubelet[2543]: E0909 00:21:58.863315 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:58.877431 kubelet[2543]: E0909 00:21:58.877417 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:58.978077 kubelet[2543]: E0909 00:21:58.978048 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.079221 kubelet[2543]: E0909 00:21:59.079120 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.179799 kubelet[2543]: E0909 00:21:59.179756 2543 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Sep 9 00:21:59.240814 kubelet[2543]: E0909 00:21:59.240794 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:59.280152 kubelet[2543]: E0909 00:21:59.280122 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.380758 kubelet[2543]: E0909 00:21:59.380662 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.481295 kubelet[2543]: E0909 00:21:59.481253 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.581933 kubelet[2543]: E0909 00:21:59.581904 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.631265 kubelet[2543]: E0909 00:21:59.631201 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:21:59.682525 kubelet[2543]: E0909 00:21:59.682498 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.783198 kubelet[2543]: E0909 00:21:59.783165 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.883829 kubelet[2543]: E0909 00:21:59.883747 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:21:59.984530 kubelet[2543]: E0909 00:21:59.984496 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:00.084818 kubelet[2543]: E0909 00:22:00.084789 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:00.185193 kubelet[2543]: E0909 00:22:00.185133 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:00.242335 kubelet[2543]: E0909 00:22:00.242237 2543 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:22:00.285970 kubelet[2543]: E0909 00:22:00.285936 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:00.386843 kubelet[2543]: E0909 00:22:00.386812 2543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:22:00.492995 kubelet[2543]: I0909 00:22:00.492899 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:00.527076 kubelet[2543]: I0909 00:22:00.527036 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:00.530403 kubelet[2543]: I0909 00:22:00.530310 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:22:00.994878 systemd[1]: Reload requested from client PID 2804 ('systemctl') (unit session-9.scope)... Sep 9 00:22:00.995061 systemd[1]: Reloading... 
Sep 9 00:22:01.025758 kubelet[2543]: I0909 00:22:01.025735 2543 apiserver.go:52] "Watching apiserver" Sep 9 00:22:01.074411 zram_generator::config[2859]: No configuration found. Sep 9 00:22:01.093771 kubelet[2543]: I0909 00:22:01.093746 2543 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:22:01.137993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:22:01.146247 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:22:01.223592 systemd[1]: Reloading finished in 228 ms. Sep 9 00:22:01.253381 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:22:01.264711 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:22:01.264930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:22:01.265000 systemd[1]: kubelet.service: Consumed 679ms CPU time, 130.3M memory peak. Sep 9 00:22:01.267033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:22:02.223198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:22:02.234059 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:22:02.292668 kubelet[2915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:22:02.292668 kubelet[2915]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:22:02.292668 kubelet[2915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:22:02.292668 kubelet[2915]: I0909 00:22:02.291807 2915 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:22:02.302770 kubelet[2915]: I0909 00:22:02.299982 2915 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:22:02.302770 kubelet[2915]: I0909 00:22:02.300003 2915 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:22:02.302770 kubelet[2915]: I0909 00:22:02.300259 2915 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:22:02.302770 kubelet[2915]: I0909 00:22:02.301370 2915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 00:22:02.303908 kubelet[2915]: I0909 00:22:02.303679 2915 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:22:02.306838 kubelet[2915]: I0909 00:22:02.306820 2915 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:22:02.309170 sudo[2926]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:22:02.309756 kubelet[2915]: I0909 00:22:02.309268 2915 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:22:02.309756 kubelet[2915]: I0909 00:22:02.309486 2915 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:22:02.309756 kubelet[2915]: I0909 00:22:02.309513 2915 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:22:02.309756 kubelet[2915]: I0909 00:22:02.309670 2915 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:22:02.309955 kubelet[2915]: I0909 00:22:02.309680 2915 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:22:02.309955 kubelet[2915]: I0909 00:22:02.309718 2915 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:22:02.309955 kubelet[2915]: I0909 00:22:02.309869 2915 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:22:02.309955 kubelet[2915]: I0909 00:22:02.309891 2915 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:22:02.309955 kubelet[2915]: I0909 00:22:02.309908 2915 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:22:02.310105 sudo[2926]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 00:22:02.313125 kubelet[2915]: I0909 00:22:02.313114 2915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:22:02.324752 kubelet[2915]: I0909 00:22:02.324607 2915 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:22:02.325106 kubelet[2915]: I0909 00:22:02.325002 2915 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:22:02.325417 kubelet[2915]: I0909 00:22:02.325403 2915 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:22:02.325480 kubelet[2915]: I0909 00:22:02.325457 2915 server.go:1287] "Started kubelet" Sep 9 00:22:02.331801 kubelet[2915]: I0909 00:22:02.331764 2915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:22:02.340895 kubelet[2915]: I0909 00:22:02.340244 2915 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:22:02.343207 kubelet[2915]: I0909 00:22:02.343167 2915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:22:02.344055 kubelet[2915]: I0909 00:22:02.344046 2915 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:22:02.344211 kubelet[2915]: I0909 00:22:02.344155 2915 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:22:02.344302 kubelet[2915]: I0909 00:22:02.344290 2915 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:22:02.345021 kubelet[2915]: I0909 00:22:02.345006 2915 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:22:02.345352 kubelet[2915]: I0909 00:22:02.345345 2915 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:22:02.345516 kubelet[2915]: I0909 00:22:02.345509 2915 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:22:02.348268 kubelet[2915]: I0909 00:22:02.348215 2915 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:22:02.348521 kubelet[2915]: I0909 00:22:02.348497 2915 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:22:02.350056 kubelet[2915]: E0909 00:22:02.350022 2915 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:22:02.350961 kubelet[2915]: I0909 00:22:02.350953 2915 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:22:02.362467 kubelet[2915]: I0909 00:22:02.362436 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:22:02.364796 kubelet[2915]: I0909 00:22:02.364782 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:22:02.364796 kubelet[2915]: I0909 00:22:02.364795 2915 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:22:02.364857 kubelet[2915]: I0909 00:22:02.364807 2915 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:22:02.364857 kubelet[2915]: I0909 00:22:02.364811 2915 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:22:02.364857 kubelet[2915]: E0909 00:22:02.364836 2915 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:22:02.398646 kubelet[2915]: I0909 00:22:02.398629 2915 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:22:02.398646 kubelet[2915]: I0909 00:22:02.398641 2915 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:22:02.398646 kubelet[2915]: I0909 00:22:02.398653 2915 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:22:02.398776 kubelet[2915]: I0909 00:22:02.398759 2915 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:22:02.398776 kubelet[2915]: I0909 00:22:02.398767 2915 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:22:02.398809 kubelet[2915]: I0909 00:22:02.398779 2915 policy_none.go:49] "None policy: Start" Sep 9 00:22:02.398809 kubelet[2915]: I0909 00:22:02.398785 2915 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:22:02.398809 kubelet[2915]: I0909 00:22:02.398791 2915 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:22:02.398857 kubelet[2915]: I0909 00:22:02.398851 2915 state_mem.go:75] "Updated machine memory state" Sep 9 00:22:02.404139 kubelet[2915]: I0909 00:22:02.404084 2915 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:22:02.404207 kubelet[2915]: I0909 00:22:02.404187 2915 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:22:02.404228 kubelet[2915]: I0909 00:22:02.404199 2915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:22:02.404840 kubelet[2915]: I0909 00:22:02.404521 2915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:22:02.406932 kubelet[2915]: E0909 00:22:02.406482 2915 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:22:02.465554 kubelet[2915]: I0909 00:22:02.465528 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:22:02.465810 kubelet[2915]: I0909 00:22:02.465792 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:02.465940 kubelet[2915]: I0909 00:22:02.465931 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.472154 kubelet[2915]: E0909 00:22:02.472127 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:22:02.472456 kubelet[2915]: E0909 00:22:02.472291 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:02.472780 kubelet[2915]: E0909 00:22:02.472768 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.508896 kubelet[2915]: I0909 00:22:02.508838 2915 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:22:02.514200 kubelet[2915]: I0909 00:22:02.514178 2915 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:22:02.514285 kubelet[2915]: I0909 00:22:02.514234 2915 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:22:02.647159 kubelet[2915]: I0909 00:22:02.647133 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.647159 kubelet[2915]: I0909 00:22:02.647159 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:22:02.647287 kubelet[2915]: I0909 00:22:02.647172 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08b50f7fae72aa1db4778b1c3826279c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"08b50f7fae72aa1db4778b1c3826279c\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:02.647287 kubelet[2915]: I0909 00:22:02.647182 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.647287 kubelet[2915]: I0909 00:22:02.647193 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") 
" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.647287 kubelet[2915]: I0909 00:22:02.647201 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.647287 kubelet[2915]: I0909 00:22:02.647209 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:22:02.647375 kubelet[2915]: I0909 00:22:02.647217 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08b50f7fae72aa1db4778b1c3826279c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"08b50f7fae72aa1db4778b1c3826279c\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:02.647375 kubelet[2915]: I0909 00:22:02.647226 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08b50f7fae72aa1db4778b1c3826279c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"08b50f7fae72aa1db4778b1c3826279c\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:02.761991 sudo[2926]: pam_unix(sudo:session): session closed for user root Sep 9 00:22:03.321440 kubelet[2915]: I0909 00:22:03.321400 2915 apiserver.go:52] "Watching apiserver" Sep 9 00:22:03.346134 kubelet[2915]: I0909 00:22:03.346096 2915 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:22:03.386738 kubelet[2915]: I0909 00:22:03.386602 2915 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:03.396403 kubelet[2915]: E0909 00:22:03.396367 2915 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:22:03.462312 kubelet[2915]: I0909 00:22:03.462274 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.462259562 podStartE2EDuration="3.462259562s" podCreationTimestamp="2025-09-09 00:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:03.461878804 +0000 UTC m=+1.204667621" watchObservedRunningTime="2025-09-09 00:22:03.462259562 +0000 UTC m=+1.205048369" Sep 9 00:22:03.462631 kubelet[2915]: I0909 00:22:03.462349 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.462345447 podStartE2EDuration="3.462345447s" podCreationTimestamp="2025-09-09 00:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:03.43856963 +0000 UTC m=+1.181358441" watchObservedRunningTime="2025-09-09 00:22:03.462345447 +0000 UTC m=+1.205134252" Sep 9 00:22:03.473023 kubelet[2915]: I0909 00:22:03.472930 2915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.472915385 podStartE2EDuration="3.472915385s" podCreationTimestamp="2025-09-09 00:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:03.472881932 +0000 UTC m=+1.215670759" watchObservedRunningTime="2025-09-09 00:22:03.472915385 +0000 UTC m=+1.215704197" Sep 9 00:22:04.270698 sudo[1950]: pam_unix(sudo:session): session closed for user root Sep 9 00:22:04.271794 sshd[1949]: Connection closed by 139.178.68.195 port 49188 Sep 9 00:22:04.272745 sshd-session[1947]: pam_unix(sshd:session): session closed for user core Sep 9 00:22:04.275310 systemd-logind[1603]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:22:04.276830 systemd[1]: sshd@6-139.178.70.101:22-139.178.68.195:49188.service: Deactivated successfully. Sep 9 00:22:04.278964 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:22:04.279159 systemd[1]: session-9.scope: Consumed 3.108s CPU time, 209.9M memory peak. Sep 9 00:22:04.281236 systemd-logind[1603]: Removed session 9. Sep 9 00:22:06.302298 kubelet[2915]: I0909 00:22:06.302273 2915 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:22:06.302624 containerd[1637]: time="2025-09-09T00:22:06.302500766Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:22:06.302853 kubelet[2915]: I0909 00:22:06.302694 2915 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:22:06.942348 systemd[1]: Created slice kubepods-besteffort-pod90fc270b_a471_4336_9444_bbf95c6a7153.slice - libcontainer container kubepods-besteffort-pod90fc270b_a471_4336_9444_bbf95c6a7153.slice. Sep 9 00:22:06.955590 systemd[1]: Created slice kubepods-burstable-pod09de569b_4a33_43cd_a9ba_be8d79e6a589.slice - libcontainer container kubepods-burstable-pod09de569b_4a33_43cd_a9ba_be8d79e6a589.slice. 
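The runtime-config update above hands the container runtime a pod CIDR of 192.168.0.0/24; deciding whether a pod IP belongs to that range is a plain prefix-containment test. A small standard-library sketch (illustration only, the addresses are examples):

```go
// Check pod IPs against the CIDR the kubelet pushed to the runtime.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cidr := netip.MustParsePrefix("192.168.0.0/24") // value from the log above
	for _, s := range []string{"192.168.0.12", "192.168.1.5"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, cidr, cidr.Contains(ip)) // true, then false
	}
}
```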
Sep 9 00:22:06.972446 kubelet[2915]: I0909 00:22:06.972418 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9fr\" (UniqueName: \"kubernetes.io/projected/90fc270b-a471-4336-9444-bbf95c6a7153-kube-api-access-kz9fr\") pod \"kube-proxy-8bd5k\" (UID: \"90fc270b-a471-4336-9444-bbf95c6a7153\") " pod="kube-system/kube-proxy-8bd5k" Sep 9 00:22:06.972636 kubelet[2915]: I0909 00:22:06.972614 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-bpf-maps\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.972710 kubelet[2915]: I0909 00:22:06.972698 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-hostproc\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.972786 kubelet[2915]: I0909 00:22:06.972775 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09de569b-4a33-43cd-a9ba-be8d79e6a589-clustermesh-secrets\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.972857 kubelet[2915]: I0909 00:22:06.972847 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-config-path\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.972917 kubelet[2915]: I0909 00:22:06.972901 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-xtables-lock\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.972977 kubelet[2915]: I0909 00:22:06.972968 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-etc-cni-netd\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973047 kubelet[2915]: I0909 00:22:06.973038 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/90fc270b-a471-4336-9444-bbf95c6a7153-kube-proxy\") pod \"kube-proxy-8bd5k\" (UID: \"90fc270b-a471-4336-9444-bbf95c6a7153\") " pod="kube-system/kube-proxy-8bd5k" Sep 9 00:22:06.973125 kubelet[2915]: I0909 00:22:06.973112 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cni-path\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973192 kubelet[2915]: I0909 00:22:06.973176 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-kernel\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973249 kubelet[2915]: I0909 00:22:06.973237 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-hubble-tls\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973320 kubelet[2915]: I0909 00:22:06.973309 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-cgroup\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973410 kubelet[2915]: I0909 00:22:06.973361 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-net\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973410 kubelet[2915]: I0909 00:22:06.973376 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-run\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973556 kubelet[2915]: I0909 00:22:06.973491 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6kz\" (UniqueName: \"kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-kube-api-access-nm6kz\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973556 kubelet[2915]: I0909 00:22:06.973513 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90fc270b-a471-4336-9444-bbf95c6a7153-xtables-lock\") pod \"kube-proxy-8bd5k\" (UID: \"90fc270b-a471-4336-9444-bbf95c6a7153\") " pod="kube-system/kube-proxy-8bd5k" Sep 9 00:22:06.973556 kubelet[2915]: I0909 00:22:06.973525 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-lib-modules\") pod \"cilium-fsnhm\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " pod="kube-system/cilium-fsnhm" Sep 9 00:22:06.973556 kubelet[2915]: I0909 00:22:06.973534 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90fc270b-a471-4336-9444-bbf95c6a7153-lib-modules\") pod \"kube-proxy-8bd5k\" (UID: \"90fc270b-a471-4336-9444-bbf95c6a7153\") " pod="kube-system/kube-proxy-8bd5k" Sep 9 00:22:07.094915 kubelet[2915]: E0909 00:22:07.094887 2915 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:22:07.095017 kubelet[2915]: E0909 00:22:07.094929 2915 projected.go:194] Error preparing data for projected volume kube-api-access-nm6kz for pod kube-system/cilium-fsnhm: 
configmap "kube-root-ca.crt" not found Sep 9 00:22:07.095017 kubelet[2915]: E0909 00:22:07.094990 2915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-kube-api-access-nm6kz podName:09de569b-4a33-43cd-a9ba-be8d79e6a589 nodeName:}" failed. No retries permitted until 2025-09-09 00:22:07.594970438 +0000 UTC m=+5.337759242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nm6kz" (UniqueName: "kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-kube-api-access-nm6kz") pod "cilium-fsnhm" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589") : configmap "kube-root-ca.crt" not found Sep 9 00:22:07.097394 kubelet[2915]: E0909 00:22:07.096440 2915 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:22:07.097394 kubelet[2915]: E0909 00:22:07.097355 2915 projected.go:194] Error preparing data for projected volume kube-api-access-kz9fr for pod kube-system/kube-proxy-8bd5k: configmap "kube-root-ca.crt" not found Sep 9 00:22:07.097523 kubelet[2915]: E0909 00:22:07.097499 2915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90fc270b-a471-4336-9444-bbf95c6a7153-kube-api-access-kz9fr podName:90fc270b-a471-4336-9444-bbf95c6a7153 nodeName:}" failed. No retries permitted until 2025-09-09 00:22:07.597378691 +0000 UTC m=+5.340167492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kz9fr" (UniqueName: "kubernetes.io/projected/90fc270b-a471-4336-9444-bbf95c6a7153-kube-api-access-kz9fr") pod "kube-proxy-8bd5k" (UID: "90fc270b-a471-4336-9444-bbf95c6a7153") : configmap "kube-root-ca.crt" not found Sep 9 00:22:07.329403 systemd[1]: Created slice kubepods-besteffort-pod654e17db_9be8_48e8_935f_11005671e9f0.slice - libcontainer container kubepods-besteffort-pod654e17db_9be8_48e8_935f_11005671e9f0.slice. 
Sep 9 00:22:07.376897 kubelet[2915]: I0909 00:22:07.376871 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wrk\" (UniqueName: \"kubernetes.io/projected/654e17db-9be8-48e8-935f-11005671e9f0-kube-api-access-86wrk\") pod \"cilium-operator-6c4d7847fc-c6dnr\" (UID: \"654e17db-9be8-48e8-935f-11005671e9f0\") " pod="kube-system/cilium-operator-6c4d7847fc-c6dnr" Sep 9 00:22:07.377234 kubelet[2915]: I0909 00:22:07.377201 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/654e17db-9be8-48e8-935f-11005671e9f0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-c6dnr\" (UID: \"654e17db-9be8-48e8-935f-11005671e9f0\") " pod="kube-system/cilium-operator-6c4d7847fc-c6dnr" Sep 9 00:22:07.634822 containerd[1637]: time="2025-09-09T00:22:07.634758331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-c6dnr,Uid:654e17db-9be8-48e8-935f-11005671e9f0,Namespace:kube-system,Attempt:0,}" Sep 9 00:22:07.645465 containerd[1637]: time="2025-09-09T00:22:07.645435221Z" level=info msg="connecting to shim da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41" address="unix:///run/containerd/s/b277d4f4b5b9f4b97239de1498e19dda123b7a98810d891c6b287eac44538d2c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:07.666515 systemd[1]: Started cri-containerd-da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41.scope - libcontainer container da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41. Sep 9 00:22:07.704770 containerd[1637]: time="2025-09-09T00:22:07.704743858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-c6dnr,Uid:654e17db-9be8-48e8-935f-11005671e9f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\"" Sep 9 00:22:07.706200 containerd[1637]: time="2025-09-09T00:22:07.706148924Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:22:07.851077 containerd[1637]: time="2025-09-09T00:22:07.851054594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bd5k,Uid:90fc270b-a471-4336-9444-bbf95c6a7153,Namespace:kube-system,Attempt:0,}" Sep 9 00:22:07.859804 containerd[1637]: time="2025-09-09T00:22:07.859668838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fsnhm,Uid:09de569b-4a33-43cd-a9ba-be8d79e6a589,Namespace:kube-system,Attempt:0,}" Sep 9 00:22:08.030773 containerd[1637]: time="2025-09-09T00:22:08.030721621Z" level=info msg="connecting to shim 9f8d25b3ebd6e50ac16a17a5908174023805e9a3cdfe1cde2ada9a9bc7a4f77d" address="unix:///run/containerd/s/62ff6a469f3608c759bd5ccb26c3851dd372710d78b35f38d208c7c1eb895a24" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:08.053521 systemd[1]: Started cri-containerd-9f8d25b3ebd6e50ac16a17a5908174023805e9a3cdfe1cde2ada9a9bc7a4f77d.scope - libcontainer container 9f8d25b3ebd6e50ac16a17a5908174023805e9a3cdfe1cde2ada9a9bc7a4f77d. 
Sep 9 00:22:08.091984 containerd[1637]: time="2025-09-09T00:22:08.091903286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bd5k,Uid:90fc270b-a471-4336-9444-bbf95c6a7153,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f8d25b3ebd6e50ac16a17a5908174023805e9a3cdfe1cde2ada9a9bc7a4f77d\"" Sep 9 00:22:08.099406 containerd[1637]: time="2025-09-09T00:22:08.098259459Z" level=info msg="CreateContainer within sandbox \"9f8d25b3ebd6e50ac16a17a5908174023805e9a3cdfe1cde2ada9a9bc7a4f77d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:22:08.101334 containerd[1637]: time="2025-09-09T00:22:08.101304300Z" level=info msg="connecting to shim bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91" address="unix:///run/containerd/s/2020be3e7b15880cacbe18abb0907da50267f01610d4da4cb50cb7da541d49ac" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:08.116559 containerd[1637]: time="2025-09-09T00:22:08.116443668Z" level=info msg="Container 8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:08.117451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773722498.mount: Deactivated successfully. Sep 9 00:22:08.123200 containerd[1637]: time="2025-09-09T00:22:08.122937329Z" level=info msg="CreateContainer within sandbox \"9f8d25b3ebd6e50ac16a17a5908174023805e9a3cdfe1cde2ada9a9bc7a4f77d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432\"" Sep 9 00:22:08.125052 containerd[1637]: time="2025-09-09T00:22:08.125035100Z" level=info msg="StartContainer for \"8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432\"" Sep 9 00:22:08.126883 containerd[1637]: time="2025-09-09T00:22:08.126865188Z" level=info msg="connecting to shim 8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432" address="unix:///run/containerd/s/62ff6a469f3608c759bd5ccb26c3851dd372710d78b35f38d208c7c1eb895a24" protocol=ttrpc version=3 Sep 9 00:22:08.128522 systemd[1]: Started cri-containerd-bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91.scope - libcontainer container bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91. Sep 9 00:22:08.150559 systemd[1]: Started cri-containerd-8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432.scope - libcontainer container 8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432. 
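The "connecting to shim ... address=unix:///run/containerd/s/<id>" entries describe ttrpc connections over unix domain sockets. The stub below only shows what such an address maps to at the transport level; the socket path is a placeholder, and a real client would speak ttrpc over the connection rather than raw bytes.

```go
// Dial a unix socket of the kind containerd shims listen on (path is made up).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/run/containerd/s/example.sock" // placeholder, not taken from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // expected unless such a socket exists
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```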
Sep 9 00:22:08.165624 containerd[1637]: time="2025-09-09T00:22:08.165357691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fsnhm,Uid:09de569b-4a33-43cd-a9ba-be8d79e6a589,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\"" Sep 9 00:22:08.185742 containerd[1637]: time="2025-09-09T00:22:08.185718476Z" level=info msg="StartContainer for \"8b2e35ff76027b468c2f72790287927d6c7a0d17782efcc117c37b1785d68432\" returns successfully" Sep 9 00:22:08.552961 kubelet[2915]: I0909 00:22:08.552923 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8bd5k" podStartSLOduration=2.55291026 podStartE2EDuration="2.55291026s" podCreationTimestamp="2025-09-09 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:08.404077411 +0000 UTC m=+6.146866215" watchObservedRunningTime="2025-09-09 00:22:08.55291026 +0000 UTC m=+6.295699067" Sep 9 00:22:09.192491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319561594.mount: Deactivated successfully. Sep 9 00:22:09.940607 containerd[1637]: time="2025-09-09T00:22:09.940578874Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:09.941783 containerd[1637]: time="2025-09-09T00:22:09.941760129Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 00:22:09.942715 containerd[1637]: time="2025-09-09T00:22:09.942678996Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:09.943622 containerd[1637]: time="2025-09-09T00:22:09.943541281Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.237276198s" Sep 9 00:22:09.943622 containerd[1637]: time="2025-09-09T00:22:09.943564443Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 00:22:09.944648 containerd[1637]: time="2025-09-09T00:22:09.944361147Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:22:09.945417 containerd[1637]: time="2025-09-09T00:22:09.945403112Z" level=info msg="CreateContainer within sandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:22:09.958079 containerd[1637]: time="2025-09-09T00:22:09.958037874Z" level=info msg="Container 681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:09.984757 containerd[1637]: 
time="2025-09-09T00:22:09.984377850Z" level=info msg="CreateContainer within sandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\"" Sep 9 00:22:09.985007 containerd[1637]: time="2025-09-09T00:22:09.984990252Z" level=info msg="StartContainer for \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\"" Sep 9 00:22:09.986731 containerd[1637]: time="2025-09-09T00:22:09.986701711Z" level=info msg="connecting to shim 681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe" address="unix:///run/containerd/s/b277d4f4b5b9f4b97239de1498e19dda123b7a98810d891c6b287eac44538d2c" protocol=ttrpc version=3 Sep 9 00:22:10.005553 systemd[1]: Started cri-containerd-681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe.scope - libcontainer container 681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe. Sep 9 00:22:10.044237 containerd[1637]: time="2025-09-09T00:22:10.044210917Z" level=info msg="StartContainer for \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" returns successfully" Sep 9 00:22:10.410564 kubelet[2915]: I0909 00:22:10.410439 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-c6dnr" podStartSLOduration=1.169005664 podStartE2EDuration="3.407448164s" podCreationTimestamp="2025-09-09 00:22:07 +0000 UTC" firstStartedPulling="2025-09-09 00:22:07.705737991 +0000 UTC m=+5.448526796" lastFinishedPulling="2025-09-09 00:22:09.944180497 +0000 UTC m=+7.686969296" observedRunningTime="2025-09-09 00:22:10.407296669 +0000 UTC m=+8.150085481" watchObservedRunningTime="2025-09-09 00:22:10.407448164 +0000 UTC m=+8.150236972" Sep 9 00:22:13.892150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654418044.mount: Deactivated successfully. 
Sep 9 00:22:17.955203 containerd[1637]: time="2025-09-09T00:22:17.955156598Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:17.965843 containerd[1637]: time="2025-09-09T00:22:17.965804795Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 00:22:17.966754 containerd[1637]: time="2025-09-09T00:22:17.966711780Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:22:17.967475 containerd[1637]: time="2025-09-09T00:22:17.967358525Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.022979462s" Sep 9 00:22:17.967475 containerd[1637]: time="2025-09-09T00:22:17.967379316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 00:22:17.970002 containerd[1637]: time="2025-09-09T00:22:17.969743139Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:22:17.980564 containerd[1637]: time="2025-09-09T00:22:17.980539852Z" level=info msg="Container 1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:17.982220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139252439.mount: Deactivated successfully. Sep 9 00:22:17.992398 containerd[1637]: time="2025-09-09T00:22:17.992357362Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\"" Sep 9 00:22:17.993462 containerd[1637]: time="2025-09-09T00:22:17.993444440Z" level=info msg="StartContainer for \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\"" Sep 9 00:22:17.994890 containerd[1637]: time="2025-09-09T00:22:17.994869450Z" level=info msg="connecting to shim 1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e" address="unix:///run/containerd/s/2020be3e7b15880cacbe18abb0907da50267f01610d4da4cb50cb7da541d49ac" protocol=ttrpc version=3 Sep 9 00:22:18.042535 systemd[1]: Started cri-containerd-1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e.scope - libcontainer container 1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e. Sep 9 00:22:18.073338 containerd[1637]: time="2025-09-09T00:22:18.073310175Z" level=info msg="StartContainer for \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" returns successfully" Sep 9 00:22:18.082222 systemd[1]: cri-containerd-1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e.scope: Deactivated successfully. 
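From the pull recorded above (166730503 bytes read in 8.022979462s) the average transfer rate follows directly; a quick back-of-the-envelope sketch:

```go
// Average pull throughput for the cilium image, from the figures in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 166730503.0                              // "bytes read" from the log
	elapsed := 8022979462 * time.Nanosecond               // 8.022979462s from the log
	mibps := bytesRead / elapsed.Seconds() / (1024 * 1024) // MiB per second
	fmt.Printf("average pull rate: %.1f MiB/s\n", mibps)   // roughly 19.8 MiB/s
}
```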
Sep 9 00:22:18.153048 containerd[1637]: time="2025-09-09T00:22:18.152941640Z" level=info msg="received exit event container_id:\"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" id:\"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" pid:3372 exited_at:{seconds:1757377338 nanos:84533077}" Sep 9 00:22:18.153851 containerd[1637]: time="2025-09-09T00:22:18.153690243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" id:\"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" pid:3372 exited_at:{seconds:1757377338 nanos:84533077}" Sep 9 00:22:18.167430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e-rootfs.mount: Deactivated successfully. Sep 9 00:22:19.460432 containerd[1637]: time="2025-09-09T00:22:19.460305122Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:22:19.538501 containerd[1637]: time="2025-09-09T00:22:19.538138618Z" level=info msg="Container a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:19.540342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134845325.mount: Deactivated successfully. Sep 9 00:22:19.574081 containerd[1637]: time="2025-09-09T00:22:19.574039846Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\"" Sep 9 00:22:19.574846 containerd[1637]: time="2025-09-09T00:22:19.574825672Z" level=info msg="StartContainer for \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\"" Sep 9 00:22:19.575757 containerd[1637]: time="2025-09-09T00:22:19.575734846Z" level=info msg="connecting to shim a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f" address="unix:///run/containerd/s/2020be3e7b15880cacbe18abb0907da50267f01610d4da4cb50cb7da541d49ac" protocol=ttrpc version=3 Sep 9 00:22:19.596768 systemd[1]: Started cri-containerd-a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f.scope - libcontainer container a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f. Sep 9 00:22:19.632266 containerd[1637]: time="2025-09-09T00:22:19.632233448Z" level=info msg="StartContainer for \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" returns successfully" Sep 9 00:22:19.641585 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:22:19.641838 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:22:19.642357 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:22:19.644959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 00:22:19.647791 containerd[1637]: time="2025-09-09T00:22:19.647760640Z" level=info msg="received exit event container_id:\"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" id:\"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" pid:3416 exited_at:{seconds:1757377339 nanos:647570084}" Sep 9 00:22:19.647979 containerd[1637]: time="2025-09-09T00:22:19.647960274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" id:\"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" pid:3416 exited_at:{seconds:1757377339 nanos:647570084}" Sep 9 00:22:19.648086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:22:19.648779 systemd[1]: cri-containerd-a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f.scope: Deactivated successfully. Sep 9 00:22:19.670924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f-rootfs.mount: Deactivated successfully. Sep 9 00:22:19.738666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:22:20.463117 containerd[1637]: time="2025-09-09T00:22:20.463081192Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:22:20.471074 containerd[1637]: time="2025-09-09T00:22:20.471037365Z" level=info msg="Container d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:20.484432 containerd[1637]: time="2025-09-09T00:22:20.484365214Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\"" Sep 9 00:22:20.485030 containerd[1637]: time="2025-09-09T00:22:20.484887842Z" level=info msg="StartContainer for \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\"" Sep 9 00:22:20.487201 containerd[1637]: time="2025-09-09T00:22:20.486924168Z" level=info msg="connecting to shim d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a" address="unix:///run/containerd/s/2020be3e7b15880cacbe18abb0907da50267f01610d4da4cb50cb7da541d49ac" protocol=ttrpc version=3 Sep 9 00:22:20.506568 systemd[1]: Started cri-containerd-d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a.scope - libcontainer container d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a. Sep 9 00:22:20.531339 containerd[1637]: time="2025-09-09T00:22:20.531319315Z" level=info msg="StartContainer for \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" returns successfully" Sep 9 00:22:20.544973 systemd[1]: cri-containerd-d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a.scope: Deactivated successfully. Sep 9 00:22:20.545158 systemd[1]: cri-containerd-d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a.scope: Consumed 16ms CPU time, 5.9M memory peak, 1M read from disk. 
Sep 9 00:22:20.546499 containerd[1637]: time="2025-09-09T00:22:20.546368864Z" level=info msg="received exit event container_id:\"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" id:\"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" pid:3467 exited_at:{seconds:1757377340 nanos:545967609}" Sep 9 00:22:20.546499 containerd[1637]: time="2025-09-09T00:22:20.546482085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" id:\"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" pid:3467 exited_at:{seconds:1757377340 nanos:545967609}" Sep 9 00:22:20.563517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a-rootfs.mount: Deactivated successfully. Sep 9 00:22:21.463454 containerd[1637]: time="2025-09-09T00:22:21.463429453Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:22:21.503833 containerd[1637]: time="2025-09-09T00:22:21.503805965Z" level=info msg="Container 83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:21.526021 containerd[1637]: time="2025-09-09T00:22:21.525969254Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\"" Sep 9 00:22:21.526536 containerd[1637]: time="2025-09-09T00:22:21.526518213Z" level=info msg="StartContainer for \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\"" Sep 9 00:22:21.527046 containerd[1637]: time="2025-09-09T00:22:21.527027178Z" level=info msg="connecting to shim 83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd" address="unix:///run/containerd/s/2020be3e7b15880cacbe18abb0907da50267f01610d4da4cb50cb7da541d49ac" protocol=ttrpc version=3 Sep 9 00:22:21.548556 systemd[1]: Started cri-containerd-83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd.scope - libcontainer container 83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd. Sep 9 00:22:21.567648 systemd[1]: cri-containerd-83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd.scope: Deactivated successfully. 
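The exit events above carry protobuf-style exited_at timestamps (seconds plus nanos). Converting one to wall-clock time is a single call; the values below are copied from the mount-bpf-fs TaskExit entry above.

```go
// Convert an exited_at {seconds, nanos} pair to RFC 3339 wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Unix(1757377340, 545967609).UTC() // seconds/nanos from the log entry
	fmt.Println(t.Format(time.RFC3339Nano))     // 2025-09-09T00:22:20.545967609Z
}
```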
Sep 9 00:22:21.568648 containerd[1637]: time="2025-09-09T00:22:21.568622964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" id:\"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" pid:3506 exited_at:{seconds:1757377341 nanos:568460535}" Sep 9 00:22:21.579045 containerd[1637]: time="2025-09-09T00:22:21.578923126Z" level=info msg="received exit event container_id:\"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" id:\"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" pid:3506 exited_at:{seconds:1757377341 nanos:568460535}" Sep 9 00:22:21.584794 containerd[1637]: time="2025-09-09T00:22:21.584666831Z" level=info msg="StartContainer for \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" returns successfully" Sep 9 00:22:21.594658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd-rootfs.mount: Deactivated successfully. Sep 9 00:22:22.482395 containerd[1637]: time="2025-09-09T00:22:22.482360442Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:22:22.527595 containerd[1637]: time="2025-09-09T00:22:22.527566645Z" level=info msg="Container 477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:22.557544 containerd[1637]: time="2025-09-09T00:22:22.557506963Z" level=info msg="CreateContainer within sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\"" Sep 9 00:22:22.558066 containerd[1637]: time="2025-09-09T00:22:22.558000677Z" level=info msg="StartContainer for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\"" Sep 9 00:22:22.559131 containerd[1637]: time="2025-09-09T00:22:22.559072461Z" level=info msg="connecting to shim 477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8" address="unix:///run/containerd/s/2020be3e7b15880cacbe18abb0907da50267f01610d4da4cb50cb7da541d49ac" protocol=ttrpc version=3 Sep 9 00:22:22.580541 systemd[1]: Started cri-containerd-477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8.scope - libcontainer container 477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8. 
Sep 9 00:22:22.621760 containerd[1637]: time="2025-09-09T00:22:22.621685592Z" level=info msg="StartContainer for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" returns successfully" Sep 9 00:22:22.697561 containerd[1637]: time="2025-09-09T00:22:22.697533003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" id:\"ccfe4cbcc894d5e64e778ce5e79ccdc2029cae83fe391db34279d1be492d2ed1\" pid:3576 exited_at:{seconds:1757377342 nanos:697294328}" Sep 9 00:22:22.788089 kubelet[2915]: I0909 00:22:22.787038 2915 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:22:22.813092 kubelet[2915]: W0909 00:22:22.812996 2915 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 00:22:22.813092 kubelet[2915]: E0909 00:22:22.813024 2915 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:22:22.813092 kubelet[2915]: I0909 00:22:22.813049 2915 status_manager.go:890] "Failed to get status for pod" podUID="45e08f7b-96cd-4a3b-a68f-92d80eafc73f" pod="kube-system/coredns-668d6bf9bc-xqnbp" err="pods \"coredns-668d6bf9bc-xqnbp\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 9 00:22:22.817298 kubelet[2915]: I0909 00:22:22.817087 2915 status_manager.go:890] "Failed to get status for pod" podUID="45e08f7b-96cd-4a3b-a68f-92d80eafc73f" pod="kube-system/coredns-668d6bf9bc-xqnbp" err="pods \"coredns-668d6bf9bc-xqnbp\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 9 00:22:22.819983 systemd[1]: Created slice kubepods-burstable-pod45e08f7b_96cd_4a3b_a68f_92d80eafc73f.slice - libcontainer container kubepods-burstable-pod45e08f7b_96cd_4a3b_a68f_92d80eafc73f.slice. Sep 9 00:22:22.828689 systemd[1]: Created slice kubepods-burstable-podd2f01cb8_57fd_4d86_94ca_0f3e6d8df306.slice - libcontainer container kubepods-burstable-podd2f01cb8_57fd_4d86_94ca_0f3e6d8df306.slice. 
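The reflector errors above ("configmaps \"coredns\" is forbidden ... no relationship found between node 'localhost' and this object") come from the node authorizer rejecting a list before the pod-to-node relationship is established. The sketch below is not kubelet code; it only shows how a client-go caller would detect that class of error, and it assumes in-cluster credentials plus the client-go and apimachinery modules.

```go
// Detect a Forbidden API error when listing configmaps (assumed in-cluster).
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
	if err != nil {
		fmt.Println("no in-cluster config:", err)
		return
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("client error:", err)
		return
	}
	_, err = client.CoreV1().ConfigMaps("kube-system").List(context.TODO(), metav1.ListOptions{})
	switch {
	case apierrors.IsForbidden(err):
		fmt.Println("forbidden, as in the log:", err) // authorizer denied the list
	case err != nil:
		fmt.Println("other error:", err)
	default:
		fmt.Println("list allowed")
	}
}
```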
Sep 9 00:22:22.971775 kubelet[2915]: I0909 00:22:22.971738 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2f01cb8-57fd-4d86-94ca-0f3e6d8df306-config-volume\") pod \"coredns-668d6bf9bc-gdljh\" (UID: \"d2f01cb8-57fd-4d86-94ca-0f3e6d8df306\") " pod="kube-system/coredns-668d6bf9bc-gdljh" Sep 9 00:22:22.971775 kubelet[2915]: I0909 00:22:22.971771 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e08f7b-96cd-4a3b-a68f-92d80eafc73f-config-volume\") pod \"coredns-668d6bf9bc-xqnbp\" (UID: \"45e08f7b-96cd-4a3b-a68f-92d80eafc73f\") " pod="kube-system/coredns-668d6bf9bc-xqnbp" Sep 9 00:22:22.971923 kubelet[2915]: I0909 00:22:22.971782 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhksc\" (UniqueName: \"kubernetes.io/projected/d2f01cb8-57fd-4d86-94ca-0f3e6d8df306-kube-api-access-xhksc\") pod \"coredns-668d6bf9bc-gdljh\" (UID: \"d2f01cb8-57fd-4d86-94ca-0f3e6d8df306\") " pod="kube-system/coredns-668d6bf9bc-gdljh" Sep 9 00:22:22.971923 kubelet[2915]: I0909 00:22:22.971794 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmd56\" (UniqueName: \"kubernetes.io/projected/45e08f7b-96cd-4a3b-a68f-92d80eafc73f-kube-api-access-tmd56\") pod \"coredns-668d6bf9bc-xqnbp\" (UID: \"45e08f7b-96cd-4a3b-a68f-92d80eafc73f\") " pod="kube-system/coredns-668d6bf9bc-xqnbp" Sep 9 00:22:23.488475 kubelet[2915]: I0909 00:22:23.488432 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fsnhm" podStartSLOduration=7.687400553 podStartE2EDuration="17.48841987s" podCreationTimestamp="2025-09-09 00:22:06 +0000 UTC" firstStartedPulling="2025-09-09 00:22:08.166948715 +0000 UTC m=+5.909737517" lastFinishedPulling="2025-09-09 00:22:17.967968031 +0000 UTC m=+15.710756834" observedRunningTime="2025-09-09 00:22:23.48311966 +0000 UTC m=+21.225908471" watchObservedRunningTime="2025-09-09 00:22:23.48841987 +0000 UTC m=+21.231208675" Sep 9 00:22:24.075137 kubelet[2915]: E0909 00:22:24.074907 2915 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 00:22:24.075137 kubelet[2915]: E0909 00:22:24.074977 2915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2f01cb8-57fd-4d86-94ca-0f3e6d8df306-config-volume podName:d2f01cb8-57fd-4d86-94ca-0f3e6d8df306 nodeName:}" failed. No retries permitted until 2025-09-09 00:22:24.574964559 +0000 UTC m=+22.317753361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d2f01cb8-57fd-4d86-94ca-0f3e6d8df306-config-volume") pod "coredns-668d6bf9bc-gdljh" (UID: "d2f01cb8-57fd-4d86-94ca-0f3e6d8df306") : failed to sync configmap cache: timed out waiting for the condition Sep 9 00:22:24.075588 kubelet[2915]: E0909 00:22:24.075534 2915 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 00:22:24.075588 kubelet[2915]: E0909 00:22:24.075563 2915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/45e08f7b-96cd-4a3b-a68f-92d80eafc73f-config-volume podName:45e08f7b-96cd-4a3b-a68f-92d80eafc73f nodeName:}" failed. 
No retries permitted until 2025-09-09 00:22:24.575555586 +0000 UTC m=+22.318344388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45e08f7b-96cd-4a3b-a68f-92d80eafc73f-config-volume") pod "coredns-668d6bf9bc-xqnbp" (UID: "45e08f7b-96cd-4a3b-a68f-92d80eafc73f") : failed to sync configmap cache: timed out waiting for the condition Sep 9 00:22:24.625340 containerd[1637]: time="2025-09-09T00:22:24.625118281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xqnbp,Uid:45e08f7b-96cd-4a3b-a68f-92d80eafc73f,Namespace:kube-system,Attempt:0,}" Sep 9 00:22:24.632413 containerd[1637]: time="2025-09-09T00:22:24.632359210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdljh,Uid:d2f01cb8-57fd-4d86-94ca-0f3e6d8df306,Namespace:kube-system,Attempt:0,}" Sep 9 00:22:48.307591 systemd-networkd[1518]: cilium_host: Link UP Sep 9 00:22:48.307702 systemd-networkd[1518]: cilium_net: Link UP Sep 9 00:22:48.307803 systemd-networkd[1518]: cilium_net: Gained carrier Sep 9 00:22:48.307898 systemd-networkd[1518]: cilium_host: Gained carrier Sep 9 00:22:48.428572 systemd-networkd[1518]: cilium_host: Gained IPv6LL Sep 9 00:22:48.499101 systemd-networkd[1518]: cilium_vxlan: Link UP Sep 9 00:22:48.499252 systemd-networkd[1518]: cilium_vxlan: Gained carrier Sep 9 00:22:48.875557 systemd-networkd[1518]: cilium_net: Gained IPv6LL Sep 9 00:22:49.187715 kernel: NET: Registered PF_ALG protocol family Sep 9 00:22:49.657067 systemd-networkd[1518]: lxc_health: Link UP Sep 9 00:22:49.669611 systemd-networkd[1518]: lxc_health: Gained carrier Sep 9 00:22:50.027480 systemd-networkd[1518]: cilium_vxlan: Gained IPv6LL Sep 9 00:22:50.206956 kernel: eth0: renamed from tmp2efda Sep 9 00:22:50.209089 systemd-networkd[1518]: lxc62a3f27c777c: Link UP Sep 9 00:22:50.209311 systemd-networkd[1518]: lxc62a3f27c777c: Gained carrier Sep 9 00:22:50.209628 systemd-networkd[1518]: lxcb312e57515cd: Link UP Sep 9 00:22:50.214649 kernel: eth0: renamed from tmp4c5ca Sep 9 00:22:50.217096 systemd-networkd[1518]: lxcb312e57515cd: Gained carrier Sep 9 00:22:50.987503 systemd-networkd[1518]: lxc_health: Gained IPv6LL Sep 9 00:22:51.755479 systemd-networkd[1518]: lxcb312e57515cd: Gained IPv6LL Sep 9 00:22:52.139536 systemd-networkd[1518]: lxc62a3f27c777c: Gained IPv6LL Sep 9 00:22:52.956505 containerd[1637]: time="2025-09-09T00:22:52.956452601Z" level=info msg="connecting to shim 4c5ca353c5ba677d9cd25602f75750d731774381d2e630f56bbd622d3243d3a7" address="unix:///run/containerd/s/d6eab20a49afb393f6f6ebb2d4bcbb8da1adec36add06e20f05be1882f99ab0d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:52.965083 containerd[1637]: time="2025-09-09T00:22:52.965029062Z" level=info msg="connecting to shim 2efdae06ae8461ac3ad8f2516e44e63733685fed88a4a373e5fe6ed645a60997" address="unix:///run/containerd/s/4b864239d7408e96f56eb5bea6adc77f514f397dad7ecf984476b6078d2148bf" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:22:52.979539 systemd[1]: Started cri-containerd-4c5ca353c5ba677d9cd25602f75750d731774381d2e630f56bbd622d3243d3a7.scope - libcontainer container 4c5ca353c5ba677d9cd25602f75750d731774381d2e630f56bbd622d3243d3a7. Sep 9 00:22:52.990495 systemd[1]: Started cri-containerd-2efdae06ae8461ac3ad8f2516e44e63733685fed88a4a373e5fe6ed645a60997.scope - libcontainer container 2efdae06ae8461ac3ad8f2516e44e63733685fed88a4a373e5fe6ed645a60997. 
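"Gained IPv6LL" in the systemd-networkd entries above means the interface acquired an fe80::/10 link-local address. The stub below simply enumerates local interfaces and prints any link-local IPv6 addresses it finds (standard library only; output depends on the host it runs on).

```go
// List interfaces that carry an IPv6 link-local (fe80::/10) address.
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			pfx, err := netip.ParsePrefix(a.String()) // e.g. "fe80::1/64"
			if err == nil && pfx.Addr().Is6() && pfx.Addr().IsLinkLocalUnicast() {
				fmt.Printf("%s: %s\n", ifc.Name, pfx.Addr())
			}
		}
	}
}
```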
Sep 9 00:22:52.999169 systemd-resolved[1519]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:22:53.006054 systemd-resolved[1519]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:22:53.048784 containerd[1637]: time="2025-09-09T00:22:53.048732432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xqnbp,Uid:45e08f7b-96cd-4a3b-a68f-92d80eafc73f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c5ca353c5ba677d9cd25602f75750d731774381d2e630f56bbd622d3243d3a7\"" Sep 9 00:22:53.058166 containerd[1637]: time="2025-09-09T00:22:53.057650318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdljh,Uid:d2f01cb8-57fd-4d86-94ca-0f3e6d8df306,Namespace:kube-system,Attempt:0,} returns sandbox id \"2efdae06ae8461ac3ad8f2516e44e63733685fed88a4a373e5fe6ed645a60997\"" Sep 9 00:22:53.064266 containerd[1637]: time="2025-09-09T00:22:53.064202309Z" level=info msg="CreateContainer within sandbox \"4c5ca353c5ba677d9cd25602f75750d731774381d2e630f56bbd622d3243d3a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:22:53.065141 containerd[1637]: time="2025-09-09T00:22:53.065116146Z" level=info msg="CreateContainer within sandbox \"2efdae06ae8461ac3ad8f2516e44e63733685fed88a4a373e5fe6ed645a60997\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:22:53.076227 containerd[1637]: time="2025-09-09T00:22:53.075957378Z" level=info msg="Container 98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:53.079058 containerd[1637]: time="2025-09-09T00:22:53.079036571Z" level=info msg="Container 98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:22:53.082221 containerd[1637]: time="2025-09-09T00:22:53.082174166Z" level=info msg="CreateContainer within sandbox \"4c5ca353c5ba677d9cd25602f75750d731774381d2e630f56bbd622d3243d3a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65\"" Sep 9 00:22:53.083089 containerd[1637]: time="2025-09-09T00:22:53.083067503Z" level=info msg="CreateContainer within sandbox \"2efdae06ae8461ac3ad8f2516e44e63733685fed88a4a373e5fe6ed645a60997\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f\"" Sep 9 00:22:53.083297 containerd[1637]: time="2025-09-09T00:22:53.083259129Z" level=info msg="StartContainer for \"98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65\"" Sep 9 00:22:53.083447 containerd[1637]: time="2025-09-09T00:22:53.083431221Z" level=info msg="StartContainer for \"98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f\"" Sep 9 00:22:53.084826 containerd[1637]: time="2025-09-09T00:22:53.084807358Z" level=info msg="connecting to shim 98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f" address="unix:///run/containerd/s/4b864239d7408e96f56eb5bea6adc77f514f397dad7ecf984476b6078d2148bf" protocol=ttrpc version=3 Sep 9 00:22:53.085885 containerd[1637]: time="2025-09-09T00:22:53.085856204Z" level=info msg="connecting to shim 98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65" address="unix:///run/containerd/s/d6eab20a49afb393f6f6ebb2d4bcbb8da1adec36add06e20f05be1882f99ab0d" protocol=ttrpc version=3 Sep 9 00:22:53.102500 systemd[1]: Started 
cri-containerd-98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f.scope - libcontainer container 98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f. Sep 9 00:22:53.105053 systemd[1]: Started cri-containerd-98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65.scope - libcontainer container 98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65. Sep 9 00:22:53.134102 containerd[1637]: time="2025-09-09T00:22:53.134078145Z" level=info msg="StartContainer for \"98c56329ecbbe36f9fb2314d7b5f8c5a0e728c9128cbbc1665d3e2d04a25ef65\" returns successfully" Sep 9 00:22:53.135150 containerd[1637]: time="2025-09-09T00:22:53.135133425Z" level=info msg="StartContainer for \"98985336dc102926d21bb8357718d9278409072b240d0700f7ded52db9df746f\" returns successfully" Sep 9 00:22:53.565511 kubelet[2915]: I0909 00:22:53.565238 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xqnbp" podStartSLOduration=46.565214815 podStartE2EDuration="46.565214815s" podCreationTimestamp="2025-09-09 00:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:53.564457592 +0000 UTC m=+51.307246405" watchObservedRunningTime="2025-09-09 00:22:53.565214815 +0000 UTC m=+51.308003622" Sep 9 00:22:53.588891 kubelet[2915]: I0909 00:22:53.588853 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gdljh" podStartSLOduration=46.588840994 podStartE2EDuration="46.588840994s" podCreationTimestamp="2025-09-09 00:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:22:53.58797558 +0000 UTC m=+51.330764382" watchObservedRunningTime="2025-09-09 00:22:53.588840994 +0000 UTC m=+51.331629800" Sep 9 00:22:53.945431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702889769.mount: Deactivated successfully. Sep 9 00:23:11.165996 systemd[1]: Started sshd@7-139.178.70.101:22-139.178.68.195:56774.service - OpenSSH per-connection server daemon (139.178.68.195:56774). Sep 9 00:23:11.236074 sshd[4230]: Accepted publickey for core from 139.178.68.195 port 56774 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:11.238227 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:11.245130 systemd-logind[1603]: New session 10 of user core. Sep 9 00:23:11.251490 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:23:12.530583 sshd[4232]: Connection closed by 139.178.68.195 port 56774 Sep 9 00:23:12.531201 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:12.539636 systemd[1]: sshd@7-139.178.70.101:22-139.178.68.195:56774.service: Deactivated successfully. Sep 9 00:23:12.541032 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:23:12.542040 systemd-logind[1603]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:23:12.543042 systemd-logind[1603]: Removed session 10. Sep 9 00:23:17.541062 systemd[1]: Started sshd@8-139.178.70.101:22-139.178.68.195:56788.service - OpenSSH per-connection server daemon (139.178.68.195:56788). 
Sep 9 00:23:17.660081 sshd[4245]: Accepted publickey for core from 139.178.68.195 port 56788 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:17.661004 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:17.664681 systemd-logind[1603]: New session 11 of user core. Sep 9 00:23:17.670507 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:23:17.819823 sshd[4247]: Connection closed by 139.178.68.195 port 56788 Sep 9 00:23:17.822874 systemd-logind[1603]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:23:17.820363 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:17.823249 systemd[1]: sshd@8-139.178.70.101:22-139.178.68.195:56788.service: Deactivated successfully. Sep 9 00:23:17.824259 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:23:17.825901 systemd-logind[1603]: Removed session 11. Sep 9 00:23:22.830856 systemd[1]: Started sshd@9-139.178.70.101:22-139.178.68.195:54780.service - OpenSSH per-connection server daemon (139.178.68.195:54780). Sep 9 00:23:22.876933 sshd[4259]: Accepted publickey for core from 139.178.68.195 port 54780 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:22.877609 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:22.880934 systemd-logind[1603]: New session 12 of user core. Sep 9 00:23:22.890468 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:23:23.019078 sshd[4261]: Connection closed by 139.178.68.195 port 54780 Sep 9 00:23:23.019478 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:23.021736 systemd[1]: sshd@9-139.178.70.101:22-139.178.68.195:54780.service: Deactivated successfully. Sep 9 00:23:23.023085 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:23:23.023946 systemd-logind[1603]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:23:23.024778 systemd-logind[1603]: Removed session 12. Sep 9 00:23:28.033537 systemd[1]: Started sshd@10-139.178.70.101:22-139.178.68.195:54792.service - OpenSSH per-connection server daemon (139.178.68.195:54792). Sep 9 00:23:28.076513 sshd[4275]: Accepted publickey for core from 139.178.68.195 port 54792 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:28.077257 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:28.079973 systemd-logind[1603]: New session 13 of user core. Sep 9 00:23:28.085498 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:23:28.177174 sshd[4277]: Connection closed by 139.178.68.195 port 54792 Sep 9 00:23:28.177540 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:28.188042 systemd[1]: sshd@10-139.178.70.101:22-139.178.68.195:54792.service: Deactivated successfully. Sep 9 00:23:28.189377 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:23:28.190113 systemd-logind[1603]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:23:28.192292 systemd-logind[1603]: Removed session 13. Sep 9 00:23:28.193259 systemd[1]: Started sshd@11-139.178.70.101:22-139.178.68.195:54796.service - OpenSSH per-connection server daemon (139.178.68.195:54796). 
Sep 9 00:23:28.239697 sshd[4289]: Accepted publickey for core from 139.178.68.195 port 54796 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:28.240561 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:28.243256 systemd-logind[1603]: New session 14 of user core. Sep 9 00:23:28.258555 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:23:28.366686 sshd[4291]: Connection closed by 139.178.68.195 port 54796 Sep 9 00:23:28.367988 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:28.376271 systemd[1]: sshd@11-139.178.70.101:22-139.178.68.195:54796.service: Deactivated successfully. Sep 9 00:23:28.378942 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:23:28.380418 systemd-logind[1603]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:23:28.386261 systemd[1]: Started sshd@12-139.178.70.101:22-139.178.68.195:54804.service - OpenSSH per-connection server daemon (139.178.68.195:54804). Sep 9 00:23:28.389911 systemd-logind[1603]: Removed session 14. Sep 9 00:23:28.431662 sshd[4301]: Accepted publickey for core from 139.178.68.195 port 54804 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:28.432430 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:28.435603 systemd-logind[1603]: New session 15 of user core. Sep 9 00:23:28.438484 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:23:28.529749 sshd[4303]: Connection closed by 139.178.68.195 port 54804 Sep 9 00:23:28.530089 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:28.532215 systemd[1]: sshd@12-139.178.70.101:22-139.178.68.195:54804.service: Deactivated successfully. Sep 9 00:23:28.533298 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:23:28.533875 systemd-logind[1603]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:23:28.534853 systemd-logind[1603]: Removed session 15. Sep 9 00:23:33.546553 systemd[1]: Started sshd@13-139.178.70.101:22-139.178.68.195:35914.service - OpenSSH per-connection server daemon (139.178.68.195:35914). Sep 9 00:23:33.593669 sshd[4317]: Accepted publickey for core from 139.178.68.195 port 35914 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:33.594661 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:33.599429 systemd-logind[1603]: New session 16 of user core. Sep 9 00:23:33.604527 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:23:33.708460 sshd[4319]: Connection closed by 139.178.68.195 port 35914 Sep 9 00:23:33.708815 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:33.711941 systemd[1]: sshd@13-139.178.70.101:22-139.178.68.195:35914.service: Deactivated successfully. Sep 9 00:23:33.713223 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:23:33.713809 systemd-logind[1603]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:23:33.714829 systemd-logind[1603]: Removed session 16. Sep 9 00:23:38.719495 systemd[1]: Started sshd@14-139.178.70.101:22-139.178.68.195:35916.service - OpenSSH per-connection server daemon (139.178.68.195:35916). 
Sep 9 00:23:38.776037 sshd[4331]: Accepted publickey for core from 139.178.68.195 port 35916 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:38.781793 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:38.785259 systemd-logind[1603]: New session 17 of user core. Sep 9 00:23:38.791611 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:23:38.922699 sshd[4333]: Connection closed by 139.178.68.195 port 35916 Sep 9 00:23:38.923139 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:38.932884 systemd[1]: sshd@14-139.178.70.101:22-139.178.68.195:35916.service: Deactivated successfully. Sep 9 00:23:38.934304 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:23:38.934912 systemd-logind[1603]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:23:38.937237 systemd[1]: Started sshd@15-139.178.70.101:22-139.178.68.195:35922.service - OpenSSH per-connection server daemon (139.178.68.195:35922). Sep 9 00:23:38.937989 systemd-logind[1603]: Removed session 17. Sep 9 00:23:39.008065 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 35922 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:39.008900 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:39.012173 systemd-logind[1603]: New session 18 of user core. Sep 9 00:23:39.023547 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:23:40.245976 sshd[4347]: Connection closed by 139.178.68.195 port 35922 Sep 9 00:23:40.245858 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:40.252651 systemd[1]: sshd@15-139.178.70.101:22-139.178.68.195:35922.service: Deactivated successfully. Sep 9 00:23:40.254059 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:23:40.255019 systemd-logind[1603]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:23:40.256784 systemd[1]: Started sshd@16-139.178.70.101:22-139.178.68.195:41762.service - OpenSSH per-connection server daemon (139.178.68.195:41762). Sep 9 00:23:40.258494 systemd-logind[1603]: Removed session 18. Sep 9 00:23:40.314282 sshd[4357]: Accepted publickey for core from 139.178.68.195 port 41762 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:40.315300 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:40.318497 systemd-logind[1603]: New session 19 of user core. Sep 9 00:23:40.322504 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:23:41.037415 sshd[4359]: Connection closed by 139.178.68.195 port 41762 Sep 9 00:23:41.036961 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:41.046182 systemd[1]: sshd@16-139.178.70.101:22-139.178.68.195:41762.service: Deactivated successfully. Sep 9 00:23:41.048988 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:23:41.051152 systemd-logind[1603]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:23:41.053619 systemd[1]: Started sshd@17-139.178.70.101:22-139.178.68.195:41764.service - OpenSSH per-connection server daemon (139.178.68.195:41764). Sep 9 00:23:41.055458 systemd-logind[1603]: Removed session 19. 
Sep 9 00:23:41.198626 sshd[4378]: Accepted publickey for core from 139.178.68.195 port 41764 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:41.199965 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:41.203449 systemd-logind[1603]: New session 20 of user core. Sep 9 00:23:41.208554 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:23:41.488559 sshd[4380]: Connection closed by 139.178.68.195 port 41764 Sep 9 00:23:41.489510 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:41.497949 systemd[1]: sshd@17-139.178.70.101:22-139.178.68.195:41764.service: Deactivated successfully. Sep 9 00:23:41.499798 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:23:41.501364 systemd-logind[1603]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:23:41.505102 systemd[1]: Started sshd@18-139.178.70.101:22-139.178.68.195:41770.service - OpenSSH per-connection server daemon (139.178.68.195:41770). Sep 9 00:23:41.506583 systemd-logind[1603]: Removed session 20. Sep 9 00:23:41.547020 sshd[4390]: Accepted publickey for core from 139.178.68.195 port 41770 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:41.548132 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:41.552311 systemd-logind[1603]: New session 21 of user core. Sep 9 00:23:41.556627 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:23:41.657684 sshd[4392]: Connection closed by 139.178.68.195 port 41770 Sep 9 00:23:41.658194 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:41.660676 systemd[1]: sshd@18-139.178.70.101:22-139.178.68.195:41770.service: Deactivated successfully. Sep 9 00:23:41.661974 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:23:41.662547 systemd-logind[1603]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:23:41.663610 systemd-logind[1603]: Removed session 21. Sep 9 00:23:46.673203 systemd[1]: Started sshd@19-139.178.70.101:22-139.178.68.195:41776.service - OpenSSH per-connection server daemon (139.178.68.195:41776). Sep 9 00:23:46.721876 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 41776 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:46.722826 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:46.725678 systemd-logind[1603]: New session 22 of user core. Sep 9 00:23:46.731519 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:23:46.823657 sshd[4410]: Connection closed by 139.178.68.195 port 41776 Sep 9 00:23:46.823992 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:46.826000 systemd-logind[1603]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:23:46.826075 systemd[1]: sshd@19-139.178.70.101:22-139.178.68.195:41776.service: Deactivated successfully. Sep 9 00:23:46.827195 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:23:46.828532 systemd-logind[1603]: Removed session 22. Sep 9 00:23:51.838646 systemd[1]: Started sshd@20-139.178.70.101:22-139.178.68.195:40366.service - OpenSSH per-connection server daemon (139.178.68.195:40366). 
Sep 9 00:23:51.885525 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 40366 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:51.886547 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:51.889714 systemd-logind[1603]: New session 23 of user core. Sep 9 00:23:51.896661 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:23:51.989158 sshd[4423]: Connection closed by 139.178.68.195 port 40366 Sep 9 00:23:51.989542 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:51.991351 systemd[1]: sshd@20-139.178.70.101:22-139.178.68.195:40366.service: Deactivated successfully. Sep 9 00:23:51.992820 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:23:51.993566 systemd-logind[1603]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:23:51.995103 systemd-logind[1603]: Removed session 23. Sep 9 00:23:57.000544 systemd[1]: Started sshd@21-139.178.70.101:22-139.178.68.195:40374.service - OpenSSH per-connection server daemon (139.178.68.195:40374). Sep 9 00:23:57.042218 sshd[4435]: Accepted publickey for core from 139.178.68.195 port 40374 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:57.042956 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:57.046084 systemd-logind[1603]: New session 24 of user core. Sep 9 00:23:57.053485 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:23:57.140728 sshd[4437]: Connection closed by 139.178.68.195 port 40374 Sep 9 00:23:57.141905 sshd-session[4435]: pam_unix(sshd:session): session closed for user core Sep 9 00:23:57.149029 systemd[1]: sshd@21-139.178.70.101:22-139.178.68.195:40374.service: Deactivated successfully. Sep 9 00:23:57.150239 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:23:57.150961 systemd-logind[1603]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:23:57.153716 systemd[1]: Started sshd@22-139.178.70.101:22-139.178.68.195:40382.service - OpenSSH per-connection server daemon (139.178.68.195:40382). Sep 9 00:23:57.154303 systemd-logind[1603]: Removed session 24. Sep 9 00:23:57.196501 sshd[4448]: Accepted publickey for core from 139.178.68.195 port 40382 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:23:57.197261 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:23:57.200271 systemd-logind[1603]: New session 25 of user core. Sep 9 00:23:57.204471 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:23:58.518198 containerd[1637]: time="2025-09-09T00:23:58.518049241Z" level=info msg="StopContainer for \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" with timeout 30 (s)" Sep 9 00:23:58.521575 containerd[1637]: time="2025-09-09T00:23:58.521516968Z" level=info msg="Stop container \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" with signal terminated" Sep 9 00:23:58.530632 systemd[1]: cri-containerd-681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe.scope: Deactivated successfully. 
Sep 9 00:23:58.531916 containerd[1637]: time="2025-09-09T00:23:58.531826226Z" level=info msg="received exit event container_id:\"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" id:\"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" pid:3184 exited_at:{seconds:1757377438 nanos:531469295}" Sep 9 00:23:58.531982 containerd[1637]: time="2025-09-09T00:23:58.531971051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" id:\"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" pid:3184 exited_at:{seconds:1757377438 nanos:531469295}" Sep 9 00:23:58.533800 systemd[1]: cri-containerd-681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe.scope: Consumed 223ms CPU time, 31M memory peak, 7.9M read from disk, 4K written to disk. Sep 9 00:23:58.548625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe-rootfs.mount: Deactivated successfully. Sep 9 00:23:58.561563 containerd[1637]: time="2025-09-09T00:23:58.561534884Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:23:58.572649 containerd[1637]: time="2025-09-09T00:23:58.572623368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" id:\"0b2a071fa347d5747bd842d79e9f17fb03a0dcefb9692afd97c41356b424335b\" pid:4474 exited_at:{seconds:1757377438 nanos:572443277}" Sep 9 00:23:58.574832 containerd[1637]: time="2025-09-09T00:23:58.574813777Z" level=info msg="StopContainer for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" with timeout 2 (s)" Sep 9 00:23:58.574978 containerd[1637]: time="2025-09-09T00:23:58.574965373Z" level=info msg="Stop container \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" with signal terminated" Sep 9 00:23:58.586309 systemd-networkd[1518]: lxc_health: Link DOWN Sep 9 00:23:58.586313 systemd-networkd[1518]: lxc_health: Lost carrier Sep 9 00:23:58.618703 containerd[1637]: time="2025-09-09T00:23:58.618647769Z" level=info msg="StopContainer for \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" returns successfully" Sep 9 00:23:58.619306 containerd[1637]: time="2025-09-09T00:23:58.619144835Z" level=info msg="StopPodSandbox for \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\"" Sep 9 00:23:58.619306 containerd[1637]: time="2025-09-09T00:23:58.619180818Z" level=info msg="Container to stop \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:23:58.623697 systemd[1]: cri-containerd-da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41.scope: Deactivated successfully. 
Sep 9 00:23:58.628835 containerd[1637]: time="2025-09-09T00:23:58.628751867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" id:\"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" pid:3019 exit_status:137 exited_at:{seconds:1757377438 nanos:628583145}" Sep 9 00:23:58.646759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41-rootfs.mount: Deactivated successfully. Sep 9 00:23:58.657907 systemd[1]: cri-containerd-477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8.scope: Deactivated successfully. Sep 9 00:23:58.658254 systemd[1]: cri-containerd-477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8.scope: Consumed 4.671s CPU time, 190.6M memory peak, 71.3M read from disk, 13.3M written to disk. Sep 9 00:23:58.665293 containerd[1637]: time="2025-09-09T00:23:58.659257050Z" level=info msg="received exit event container_id:\"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" id:\"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" pid:3543 exited_at:{seconds:1757377438 nanos:658571581}" Sep 9 00:23:58.671849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8-rootfs.mount: Deactivated successfully. Sep 9 00:23:58.691252 containerd[1637]: time="2025-09-09T00:23:58.691221048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" id:\"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" pid:3543 exited_at:{seconds:1757377438 nanos:658571581}" Sep 9 00:23:58.693239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41-shm.mount: Deactivated successfully. 
Sep 9 00:23:58.695433 containerd[1637]: time="2025-09-09T00:23:58.695204949Z" level=info msg="received exit event sandbox_id:\"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" exit_status:137 exited_at:{seconds:1757377438 nanos:628583145}" Sep 9 00:23:58.703077 containerd[1637]: time="2025-09-09T00:23:58.703046705Z" level=info msg="TearDown network for sandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" successfully" Sep 9 00:23:58.703210 containerd[1637]: time="2025-09-09T00:23:58.703200196Z" level=info msg="StopPodSandbox for \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" returns successfully" Sep 9 00:23:58.703418 containerd[1637]: time="2025-09-09T00:23:58.703408764Z" level=info msg="shim disconnected" id=da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41 namespace=k8s.io Sep 9 00:23:58.703490 containerd[1637]: time="2025-09-09T00:23:58.703468203Z" level=warning msg="cleaning up after shim disconnected" id=da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41 namespace=k8s.io Sep 9 00:23:58.706360 containerd[1637]: time="2025-09-09T00:23:58.703527772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:23:58.715174 containerd[1637]: time="2025-09-09T00:23:58.715150094Z" level=info msg="StopContainer for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" returns successfully" Sep 9 00:23:58.715436 containerd[1637]: time="2025-09-09T00:23:58.715422003Z" level=info msg="StopPodSandbox for \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\"" Sep 9 00:23:58.715561 containerd[1637]: time="2025-09-09T00:23:58.715455165Z" level=info msg="Container to stop \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:23:58.715561 containerd[1637]: time="2025-09-09T00:23:58.715461152Z" level=info msg="Container to stop \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:23:58.715753 containerd[1637]: time="2025-09-09T00:23:58.715661065Z" level=info msg="Container to stop \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:23:58.715753 containerd[1637]: time="2025-09-09T00:23:58.715670737Z" level=info msg="Container to stop \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:23:58.715753 containerd[1637]: time="2025-09-09T00:23:58.715675755Z" level=info msg="Container to stop \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:23:58.724637 systemd[1]: cri-containerd-bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91.scope: Deactivated successfully. 
Sep 9 00:23:58.725514 kubelet[2915]: I0909 00:23:58.725461 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86wrk\" (UniqueName: \"kubernetes.io/projected/654e17db-9be8-48e8-935f-11005671e9f0-kube-api-access-86wrk\") pod \"654e17db-9be8-48e8-935f-11005671e9f0\" (UID: \"654e17db-9be8-48e8-935f-11005671e9f0\") " Sep 9 00:23:58.725514 kubelet[2915]: I0909 00:23:58.725497 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/654e17db-9be8-48e8-935f-11005671e9f0-cilium-config-path\") pod \"654e17db-9be8-48e8-935f-11005671e9f0\" (UID: \"654e17db-9be8-48e8-935f-11005671e9f0\") " Sep 9 00:23:58.726314 containerd[1637]: time="2025-09-09T00:23:58.726290418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" id:\"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" pid:3115 exit_status:137 exited_at:{seconds:1757377438 nanos:725635176}" Sep 9 00:23:58.728504 kubelet[2915]: I0909 00:23:58.728078 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654e17db-9be8-48e8-935f-11005671e9f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "654e17db-9be8-48e8-935f-11005671e9f0" (UID: "654e17db-9be8-48e8-935f-11005671e9f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:23:58.751296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91-rootfs.mount: Deactivated successfully. Sep 9 00:23:58.755908 kubelet[2915]: I0909 00:23:58.755591 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654e17db-9be8-48e8-935f-11005671e9f0-kube-api-access-86wrk" (OuterVolumeSpecName: "kube-api-access-86wrk") pod "654e17db-9be8-48e8-935f-11005671e9f0" (UID: "654e17db-9be8-48e8-935f-11005671e9f0"). InnerVolumeSpecName "kube-api-access-86wrk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:23:58.772219 containerd[1637]: time="2025-09-09T00:23:58.772057400Z" level=info msg="shim disconnected" id=bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91 namespace=k8s.io Sep 9 00:23:58.772219 containerd[1637]: time="2025-09-09T00:23:58.772076125Z" level=warning msg="cleaning up after shim disconnected" id=bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91 namespace=k8s.io Sep 9 00:23:58.772219 containerd[1637]: time="2025-09-09T00:23:58.772080670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:23:58.774331 containerd[1637]: time="2025-09-09T00:23:58.772592911Z" level=info msg="received exit event sandbox_id:\"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" exit_status:137 exited_at:{seconds:1757377438 nanos:725635176}" Sep 9 00:23:58.774983 containerd[1637]: time="2025-09-09T00:23:58.774970610Z" level=info msg="TearDown network for sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" successfully" Sep 9 00:23:58.775030 containerd[1637]: time="2025-09-09T00:23:58.775022466Z" level=info msg="StopPodSandbox for \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" returns successfully" Sep 9 00:23:58.825835 kubelet[2915]: I0909 00:23:58.825804 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09de569b-4a33-43cd-a9ba-be8d79e6a589-clustermesh-secrets\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.825835 kubelet[2915]: I0909 00:23:58.825842 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-hostproc\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826500 kubelet[2915]: I0909 00:23:58.825852 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-net\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826500 kubelet[2915]: I0909 00:23:58.825862 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-run\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826500 kubelet[2915]: I0909 00:23:58.825871 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-bpf-maps\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826500 kubelet[2915]: I0909 00:23:58.825881 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-hubble-tls\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826500 kubelet[2915]: I0909 00:23:58.825889 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-cgroup\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826500 kubelet[2915]: I0909 00:23:58.825899 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm6kz\" (UniqueName: \"kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-kube-api-access-nm6kz\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826630 kubelet[2915]: I0909 00:23:58.825908 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-xtables-lock\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826630 kubelet[2915]: I0909 00:23:58.825916 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cni-path\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.826630 kubelet[2915]: I0909 00:23:58.825913 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-hostproc" (OuterVolumeSpecName: "hostproc") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.826630 kubelet[2915]: I0909 00:23:58.825937 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.826630 kubelet[2915]: I0909 00:23:58.825949 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.826724 kubelet[2915]: I0909 00:23:58.825957 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.826724 kubelet[2915]: I0909 00:23:58.825965 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.827078 kubelet[2915]: I0909 00:23:58.826863 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.829379 kubelet[2915]: I0909 00:23:58.829361 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-kube-api-access-nm6kz" (OuterVolumeSpecName: "kube-api-access-nm6kz") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "kube-api-access-nm6kz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:23:58.829528 kubelet[2915]: I0909 00:23:58.825924 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-lib-modules\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.829528 kubelet[2915]: I0909 00:23:58.829471 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.829528 kubelet[2915]: I0909 00:23:58.829485 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cni-path" (OuterVolumeSpecName: "cni-path") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.829528 kubelet[2915]: I0909 00:23:58.829492 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-config-path\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.829528 kubelet[2915]: I0909 00:23:58.829505 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-etc-cni-netd\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829515 2915 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-kernel\") pod \"09de569b-4a33-43cd-a9ba-be8d79e6a589\" (UID: \"09de569b-4a33-43cd-a9ba-be8d79e6a589\") " Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829546 2915 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-86wrk\" (UniqueName: \"kubernetes.io/projected/654e17db-9be8-48e8-935f-11005671e9f0-kube-api-access-86wrk\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829553 2915 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829559 2915 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829563 2915 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/654e17db-9be8-48e8-935f-11005671e9f0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829568 2915 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829573 2915 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829627 kubelet[2915]: I0909 00:23:58.829577 2915 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829751 kubelet[2915]: I0909 00:23:58.829582 2915 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nm6kz\" (UniqueName: \"kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-kube-api-access-nm6kz\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.829751 kubelet[2915]: I0909 00:23:58.829596 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod 
"09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.829821 kubelet[2915]: I0909 00:23:58.829518 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:23:58.829821 kubelet[2915]: I0909 00:23:58.829812 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:23:58.831615 kubelet[2915]: I0909 00:23:58.831601 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:23:58.831704 kubelet[2915]: I0909 00:23:58.831694 2915 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09de569b-4a33-43cd-a9ba-be8d79e6a589-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "09de569b-4a33-43cd-a9ba-be8d79e6a589" (UID: "09de569b-4a33-43cd-a9ba-be8d79e6a589"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:23:58.930036 kubelet[2915]: I0909 00:23:58.930007 2915 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930145 2915 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09de569b-4a33-43cd-a9ba-be8d79e6a589-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930156 2915 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930162 2915 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09de569b-4a33-43cd-a9ba-be8d79e6a589-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930168 2915 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930172 2915 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930177 2915 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09de569b-4a33-43cd-a9ba-be8d79e6a589-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:58.930192 kubelet[2915]: I0909 00:23:58.930182 2915 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09de569b-4a33-43cd-a9ba-be8d79e6a589-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:23:59.548502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91-shm.mount: Deactivated successfully. Sep 9 00:23:59.548564 systemd[1]: var-lib-kubelet-pods-09de569b\x2d4a33\x2d43cd\x2da9ba\x2dbe8d79e6a589-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnm6kz.mount: Deactivated successfully. Sep 9 00:23:59.548609 systemd[1]: var-lib-kubelet-pods-654e17db\x2d9be8\x2d48e8\x2d935f\x2d11005671e9f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86wrk.mount: Deactivated successfully. Sep 9 00:23:59.548649 systemd[1]: var-lib-kubelet-pods-09de569b\x2d4a33\x2d43cd\x2da9ba\x2dbe8d79e6a589-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:23:59.548686 systemd[1]: var-lib-kubelet-pods-09de569b\x2d4a33\x2d43cd\x2da9ba\x2dbe8d79e6a589-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 9 00:23:59.643501 kubelet[2915]: I0909 00:23:59.643479 2915 scope.go:117] "RemoveContainer" containerID="477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8" Sep 9 00:23:59.644864 containerd[1637]: time="2025-09-09T00:23:59.644444215Z" level=info msg="RemoveContainer for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\"" Sep 9 00:23:59.647459 systemd[1]: Removed slice kubepods-burstable-pod09de569b_4a33_43cd_a9ba_be8d79e6a589.slice - libcontainer container kubepods-burstable-pod09de569b_4a33_43cd_a9ba_be8d79e6a589.slice. Sep 9 00:23:59.647522 systemd[1]: kubepods-burstable-pod09de569b_4a33_43cd_a9ba_be8d79e6a589.slice: Consumed 4.731s CPU time, 191.4M memory peak, 72.3M read from disk, 13.3M written to disk. Sep 9 00:23:59.649684 containerd[1637]: time="2025-09-09T00:23:59.649658776Z" level=info msg="RemoveContainer for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" returns successfully" Sep 9 00:23:59.650486 kubelet[2915]: I0909 00:23:59.650434 2915 scope.go:117] "RemoveContainer" containerID="83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd" Sep 9 00:23:59.652085 systemd[1]: Removed slice kubepods-besteffort-pod654e17db_9be8_48e8_935f_11005671e9f0.slice - libcontainer container kubepods-besteffort-pod654e17db_9be8_48e8_935f_11005671e9f0.slice. Sep 9 00:23:59.652153 systemd[1]: kubepods-besteffort-pod654e17db_9be8_48e8_935f_11005671e9f0.slice: Consumed 246ms CPU time, 31.7M memory peak, 7.9M read from disk, 4K written to disk. Sep 9 00:23:59.653724 containerd[1637]: time="2025-09-09T00:23:59.653630293Z" level=info msg="RemoveContainer for \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\"" Sep 9 00:23:59.657087 containerd[1637]: time="2025-09-09T00:23:59.657063065Z" level=info msg="RemoveContainer for \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" returns successfully" Sep 9 00:23:59.657507 kubelet[2915]: I0909 00:23:59.657486 2915 scope.go:117] "RemoveContainer" containerID="d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a" Sep 9 00:23:59.659583 containerd[1637]: time="2025-09-09T00:23:59.659523907Z" level=info msg="RemoveContainer for \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\"" Sep 9 00:23:59.664007 containerd[1637]: time="2025-09-09T00:23:59.663952529Z" level=info msg="RemoveContainer for \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" returns successfully" Sep 9 00:23:59.664853 kubelet[2915]: I0909 00:23:59.664824 2915 scope.go:117] "RemoveContainer" containerID="a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f" Sep 9 00:23:59.668758 containerd[1637]: time="2025-09-09T00:23:59.668558314Z" level=info msg="RemoveContainer for \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\"" Sep 9 00:23:59.669886 containerd[1637]: time="2025-09-09T00:23:59.669870927Z" level=info msg="RemoveContainer for \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" returns successfully" Sep 9 00:23:59.669964 kubelet[2915]: I0909 00:23:59.669948 2915 scope.go:117] "RemoveContainer" containerID="1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e" Sep 9 00:23:59.670664 containerd[1637]: time="2025-09-09T00:23:59.670649017Z" level=info msg="RemoveContainer for \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\"" Sep 9 00:23:59.671889 containerd[1637]: time="2025-09-09T00:23:59.671875240Z" level=info msg="RemoveContainer for 
\"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" returns successfully" Sep 9 00:23:59.671956 kubelet[2915]: I0909 00:23:59.671942 2915 scope.go:117] "RemoveContainer" containerID="477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8" Sep 9 00:23:59.674051 containerd[1637]: time="2025-09-09T00:23:59.672048252Z" level=error msg="ContainerStatus for \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\": not found" Sep 9 00:23:59.674649 kubelet[2915]: E0909 00:23:59.674631 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\": not found" containerID="477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8" Sep 9 00:23:59.674709 kubelet[2915]: I0909 00:23:59.674656 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8"} err="failed to get container status \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"477f39457b4285854432e68e7786017342437dd0803ec39080e460176aa5f0a8\": not found" Sep 9 00:23:59.674709 kubelet[2915]: I0909 00:23:59.674707 2915 scope.go:117] "RemoveContainer" containerID="83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd" Sep 9 00:23:59.674912 containerd[1637]: time="2025-09-09T00:23:59.674873903Z" level=error msg="ContainerStatus for \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\": not found" Sep 9 00:23:59.674955 kubelet[2915]: E0909 00:23:59.674933 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\": not found" containerID="83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd" Sep 9 00:23:59.674955 kubelet[2915]: I0909 00:23:59.674943 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd"} err="failed to get container status \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"83c17ffabbd5d5a5764c8ff4416dade6dd3dffd0daef1c30e22d152b2fcb39bd\": not found" Sep 9 00:23:59.674955 kubelet[2915]: I0909 00:23:59.674951 2915 scope.go:117] "RemoveContainer" containerID="d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a" Sep 9 00:23:59.675187 containerd[1637]: time="2025-09-09T00:23:59.675062500Z" level=error msg="ContainerStatus for \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\": not found" Sep 9 00:23:59.675214 kubelet[2915]: E0909 00:23:59.675126 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\": not found" containerID="d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a" Sep 9 00:23:59.675214 kubelet[2915]: I0909 00:23:59.675139 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a"} err="failed to get container status \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5e7cd22c6b1f5718ac9054e58110d0a5ee514c961b3f730c018699a95ed512a\": not found" Sep 9 00:23:59.675214 kubelet[2915]: I0909 00:23:59.675148 2915 scope.go:117] "RemoveContainer" containerID="a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f" Sep 9 00:23:59.675266 containerd[1637]: time="2025-09-09T00:23:59.675236361Z" level=error msg="ContainerStatus for \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\": not found" Sep 9 00:23:59.675303 kubelet[2915]: E0909 00:23:59.675282 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\": not found" containerID="a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f" Sep 9 00:23:59.675303 kubelet[2915]: I0909 00:23:59.675295 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f"} err="failed to get container status \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a985d625e118e7b68b871caf7e10ff0630c1926c08d03ac86e706ed03af4953f\": not found" Sep 9 00:23:59.675461 kubelet[2915]: I0909 00:23:59.675304 2915 scope.go:117] "RemoveContainer" containerID="1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e" Sep 9 00:23:59.675461 kubelet[2915]: E0909 00:23:59.675457 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\": not found" containerID="1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e" Sep 9 00:23:59.675527 containerd[1637]: time="2025-09-09T00:23:59.675381832Z" level=error msg="ContainerStatus for \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\": not found" Sep 9 00:23:59.675548 kubelet[2915]: I0909 00:23:59.675467 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e"} err="failed to get container status \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e77b2ad2c458ec575713686e1fbb7bc6873c9ea10c12afbce916619ab11339e\": not found" Sep 9 00:23:59.675548 
kubelet[2915]: I0909 00:23:59.675473 2915 scope.go:117] "RemoveContainer" containerID="681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe" Sep 9 00:23:59.676273 containerd[1637]: time="2025-09-09T00:23:59.676258614Z" level=info msg="RemoveContainer for \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\"" Sep 9 00:23:59.677361 containerd[1637]: time="2025-09-09T00:23:59.677347216Z" level=info msg="RemoveContainer for \"681dec33e046acda51c4a788f31d553582301a3ae3061526a1a51f8d2fe5f1fe\" returns successfully" Sep 9 00:24:00.367353 kubelet[2915]: I0909 00:24:00.367325 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09de569b-4a33-43cd-a9ba-be8d79e6a589" path="/var/lib/kubelet/pods/09de569b-4a33-43cd-a9ba-be8d79e6a589/volumes" Sep 9 00:24:00.367826 kubelet[2915]: I0909 00:24:00.367808 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654e17db-9be8-48e8-935f-11005671e9f0" path="/var/lib/kubelet/pods/654e17db-9be8-48e8-935f-11005671e9f0/volumes" Sep 9 00:24:00.492785 sshd[4450]: Connection closed by 139.178.68.195 port 40382 Sep 9 00:24:00.493266 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Sep 9 00:24:00.500588 systemd[1]: sshd@22-139.178.70.101:22-139.178.68.195:40382.service: Deactivated successfully. Sep 9 00:24:00.501577 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:24:00.502413 systemd-logind[1603]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:24:00.503813 systemd[1]: Started sshd@23-139.178.70.101:22-139.178.68.195:52250.service - OpenSSH per-connection server daemon (139.178.68.195:52250). Sep 9 00:24:00.504696 systemd-logind[1603]: Removed session 25. Sep 9 00:24:00.554707 sshd[4606]: Accepted publickey for core from 139.178.68.195 port 52250 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:24:00.555537 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:24:00.558256 systemd-logind[1603]: New session 26 of user core. Sep 9 00:24:00.568476 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:24:00.894038 sshd[4608]: Connection closed by 139.178.68.195 port 52250 Sep 9 00:24:00.894284 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Sep 9 00:24:00.905684 systemd[1]: sshd@23-139.178.70.101:22-139.178.68.195:52250.service: Deactivated successfully. Sep 9 00:24:00.906946 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:24:00.907931 systemd-logind[1603]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:24:00.912398 kubelet[2915]: I0909 00:24:00.912349 2915 memory_manager.go:355] "RemoveStaleState removing state" podUID="654e17db-9be8-48e8-935f-11005671e9f0" containerName="cilium-operator" Sep 9 00:24:00.912398 kubelet[2915]: I0909 00:24:00.912366 2915 memory_manager.go:355] "RemoveStaleState removing state" podUID="09de569b-4a33-43cd-a9ba-be8d79e6a589" containerName="cilium-agent" Sep 9 00:24:00.912881 systemd[1]: Started sshd@24-139.178.70.101:22-139.178.68.195:52266.service - OpenSSH per-connection server daemon (139.178.68.195:52266). Sep 9 00:24:00.915528 systemd-logind[1603]: Removed session 26. Sep 9 00:24:00.925224 systemd[1]: Created slice kubepods-burstable-pod93c08b3f_9dc5_4810_8968_c2fd2ec93b4a.slice - libcontainer container kubepods-burstable-pod93c08b3f_9dc5_4810_8968_c2fd2ec93b4a.slice. 
Sep 9 00:24:00.940464 kubelet[2915]: I0909 00:24:00.940435 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-cilium-cgroup\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940621 kubelet[2915]: I0909 00:24:00.940560 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-host-proc-sys-net\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940621 kubelet[2915]: I0909 00:24:00.940576 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-bpf-maps\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940621 kubelet[2915]: I0909 00:24:00.940588 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-hostproc\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940621 kubelet[2915]: I0909 00:24:00.940600 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-cni-path\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940816 kubelet[2915]: I0909 00:24:00.940691 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-etc-cni-netd\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940816 kubelet[2915]: I0909 00:24:00.940708 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-cilium-run\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940816 kubelet[2915]: I0909 00:24:00.940730 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-host-proc-sys-kernel\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940969 kubelet[2915]: I0909 00:24:00.940742 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-xtables-lock\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940969 kubelet[2915]: I0909 00:24:00.940906 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8nc\" (UniqueName: 
\"kubernetes.io/projected/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-kube-api-access-2r8nc\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940969 kubelet[2915]: I0909 00:24:00.940919 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-lib-modules\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.940969 kubelet[2915]: I0909 00:24:00.940927 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-cilium-config-path\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.941196 kubelet[2915]: I0909 00:24:00.940937 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-cilium-ipsec-secrets\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.941196 kubelet[2915]: I0909 00:24:00.941044 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-clustermesh-secrets\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.941196 kubelet[2915]: I0909 00:24:00.941055 2915 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93c08b3f-9dc5-4810-8968-c2fd2ec93b4a-hubble-tls\") pod \"cilium-2968k\" (UID: \"93c08b3f-9dc5-4810-8968-c2fd2ec93b4a\") " pod="kube-system/cilium-2968k" Sep 9 00:24:00.964928 sshd[4618]: Accepted publickey for core from 139.178.68.195 port 52266 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:24:00.965455 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:24:00.969601 systemd-logind[1603]: New session 27 of user core. Sep 9 00:24:00.977537 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 00:24:01.026148 sshd[4620]: Connection closed by 139.178.68.195 port 52266 Sep 9 00:24:01.026614 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Sep 9 00:24:01.036492 systemd[1]: sshd@24-139.178.70.101:22-139.178.68.195:52266.service: Deactivated successfully. Sep 9 00:24:01.037890 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:24:01.038541 systemd-logind[1603]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:24:01.040739 systemd[1]: Started sshd@25-139.178.70.101:22-139.178.68.195:52268.service - OpenSSH per-connection server daemon (139.178.68.195:52268). Sep 9 00:24:01.043883 systemd-logind[1603]: Removed session 27. Sep 9 00:24:01.093308 sshd[4627]: Accepted publickey for core from 139.178.68.195 port 52268 ssh2: RSA SHA256:VfV4DbcB1YJ5ML+Hb+wSNrAGdGs+bVUt3FrVVQ/IlNk Sep 9 00:24:01.094116 sshd-session[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:24:01.096742 systemd-logind[1603]: New session 28 of user core. 
Sep 9 00:24:01.107528 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 00:24:01.230445 containerd[1637]: time="2025-09-09T00:24:01.230373383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2968k,Uid:93c08b3f-9dc5-4810-8968-c2fd2ec93b4a,Namespace:kube-system,Attempt:0,}" Sep 9 00:24:01.243188 containerd[1637]: time="2025-09-09T00:24:01.243149568Z" level=info msg="connecting to shim 3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518" address="unix:///run/containerd/s/b80169813e0f3c508665916f6be4fadb429c24a6140d67788255a6ef6dc020e9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:24:01.264556 systemd[1]: Started cri-containerd-3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518.scope - libcontainer container 3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518. Sep 9 00:24:01.282901 containerd[1637]: time="2025-09-09T00:24:01.282875773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2968k,Uid:93c08b3f-9dc5-4810-8968-c2fd2ec93b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\"" Sep 9 00:24:01.284830 containerd[1637]: time="2025-09-09T00:24:01.284809684Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:24:01.289889 containerd[1637]: time="2025-09-09T00:24:01.289801453Z" level=info msg="Container f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:24:01.293014 containerd[1637]: time="2025-09-09T00:24:01.292995337Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\"" Sep 9 00:24:01.293513 containerd[1637]: time="2025-09-09T00:24:01.293437714Z" level=info msg="StartContainer for \"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\"" Sep 9 00:24:01.294145 containerd[1637]: time="2025-09-09T00:24:01.294133230Z" level=info msg="connecting to shim f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288" address="unix:///run/containerd/s/b80169813e0f3c508665916f6be4fadb429c24a6140d67788255a6ef6dc020e9" protocol=ttrpc version=3 Sep 9 00:24:01.309518 systemd[1]: Started cri-containerd-f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288.scope - libcontainer container f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288. Sep 9 00:24:01.342455 containerd[1637]: time="2025-09-09T00:24:01.342418636Z" level=info msg="StartContainer for \"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\" returns successfully" Sep 9 00:24:01.372292 systemd[1]: cri-containerd-f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288.scope: Deactivated successfully. Sep 9 00:24:01.372739 systemd[1]: cri-containerd-f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288.scope: Consumed 14ms CPU time, 9.3M memory peak, 2.8M read from disk. 
Sep 9 00:24:01.374091 containerd[1637]: time="2025-09-09T00:24:01.373318746Z" level=info msg="received exit event container_id:\"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\" id:\"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\" pid:4699 exited_at:{seconds:1757377441 nanos:372723751}" Sep 9 00:24:01.374252 containerd[1637]: time="2025-09-09T00:24:01.373709152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\" id:\"f91dd40110665409a20fccf3506af1ba4caf2e04ef074fdb95559652c5262288\" pid:4699 exited_at:{seconds:1757377441 nanos:372723751}" Sep 9 00:24:01.654528 containerd[1637]: time="2025-09-09T00:24:01.654018017Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:24:01.660503 containerd[1637]: time="2025-09-09T00:24:01.660475457Z" level=info msg="Container 21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:24:01.664407 containerd[1637]: time="2025-09-09T00:24:01.664361819Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\"" Sep 9 00:24:01.665282 containerd[1637]: time="2025-09-09T00:24:01.665263226Z" level=info msg="StartContainer for \"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\"" Sep 9 00:24:01.666305 containerd[1637]: time="2025-09-09T00:24:01.665896159Z" level=info msg="connecting to shim 21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853" address="unix:///run/containerd/s/b80169813e0f3c508665916f6be4fadb429c24a6140d67788255a6ef6dc020e9" protocol=ttrpc version=3 Sep 9 00:24:01.680657 systemd[1]: Started cri-containerd-21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853.scope - libcontainer container 21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853. Sep 9 00:24:01.697038 containerd[1637]: time="2025-09-09T00:24:01.696999776Z" level=info msg="StartContainer for \"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\" returns successfully" Sep 9 00:24:01.707294 systemd[1]: cri-containerd-21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853.scope: Deactivated successfully. Sep 9 00:24:01.707595 systemd[1]: cri-containerd-21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853.scope: Consumed 11ms CPU time, 7.6M memory peak, 2.2M read from disk. 
Sep 9 00:24:01.708340 containerd[1637]: time="2025-09-09T00:24:01.708313282Z" level=info msg="received exit event container_id:\"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\" id:\"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\" pid:4743 exited_at:{seconds:1757377441 nanos:708178912}" Sep 9 00:24:01.708495 containerd[1637]: time="2025-09-09T00:24:01.708344981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\" id:\"21796c9edc44aa6387c83521462dcf496c2cbc9b649e32588572e05af35d1853\" pid:4743 exited_at:{seconds:1757377441 nanos:708178912}" Sep 9 00:24:02.419105 containerd[1637]: time="2025-09-09T00:24:02.419077834Z" level=info msg="StopPodSandbox for \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\"" Sep 9 00:24:02.419653 containerd[1637]: time="2025-09-09T00:24:02.419637887Z" level=info msg="TearDown network for sandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" successfully" Sep 9 00:24:02.419794 containerd[1637]: time="2025-09-09T00:24:02.419697944Z" level=info msg="StopPodSandbox for \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" returns successfully" Sep 9 00:24:02.420005 containerd[1637]: time="2025-09-09T00:24:02.419986350Z" level=info msg="RemovePodSandbox for \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\"" Sep 9 00:24:02.420052 containerd[1637]: time="2025-09-09T00:24:02.420006552Z" level=info msg="Forcibly stopping sandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\"" Sep 9 00:24:02.420082 containerd[1637]: time="2025-09-09T00:24:02.420056646Z" level=info msg="TearDown network for sandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" successfully" Sep 9 00:24:02.421074 containerd[1637]: time="2025-09-09T00:24:02.420992835Z" level=info msg="Ensure that sandbox da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41 in task-service has been cleanup successfully" Sep 9 00:24:02.425702 containerd[1637]: time="2025-09-09T00:24:02.425667756Z" level=info msg="RemovePodSandbox \"da66d1a2b0a15d584067a9c06c6ad00c969bf3d954a8f4936381f1a7f35c9c41\" returns successfully" Sep 9 00:24:02.426106 containerd[1637]: time="2025-09-09T00:24:02.426026624Z" level=info msg="StopPodSandbox for \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\"" Sep 9 00:24:02.426106 containerd[1637]: time="2025-09-09T00:24:02.426090870Z" level=info msg="TearDown network for sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" successfully" Sep 9 00:24:02.426106 containerd[1637]: time="2025-09-09T00:24:02.426097607Z" level=info msg="StopPodSandbox for \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" returns successfully" Sep 9 00:24:02.426273 containerd[1637]: time="2025-09-09T00:24:02.426254081Z" level=info msg="RemovePodSandbox for \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\"" Sep 9 00:24:02.426273 containerd[1637]: time="2025-09-09T00:24:02.426269631Z" level=info msg="Forcibly stopping sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\"" Sep 9 00:24:02.426429 containerd[1637]: time="2025-09-09T00:24:02.426308161Z" level=info msg="TearDown network for sandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" successfully" Sep 9 00:24:02.427352 containerd[1637]: time="2025-09-09T00:24:02.427332841Z" level=info msg="Ensure that 
sandbox bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91 in task-service has been cleanup successfully" Sep 9 00:24:02.431345 containerd[1637]: time="2025-09-09T00:24:02.431312220Z" level=info msg="RemovePodSandbox \"bdf0758736b7418e9c0e6b8e66b7bf09aa7b03b274b29ce85d7523a0ce149a91\" returns successfully" Sep 9 00:24:02.475060 kubelet[2915]: E0909 00:24:02.475003 2915 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:24:02.656176 containerd[1637]: time="2025-09-09T00:24:02.656063325Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:24:02.681400 containerd[1637]: time="2025-09-09T00:24:02.680017529Z" level=info msg="Container fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:24:02.705962 containerd[1637]: time="2025-09-09T00:24:02.705939442Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\"" Sep 9 00:24:02.706413 containerd[1637]: time="2025-09-09T00:24:02.706401100Z" level=info msg="StartContainer for \"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\"" Sep 9 00:24:02.707368 containerd[1637]: time="2025-09-09T00:24:02.707325611Z" level=info msg="connecting to shim fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b" address="unix:///run/containerd/s/b80169813e0f3c508665916f6be4fadb429c24a6140d67788255a6ef6dc020e9" protocol=ttrpc version=3 Sep 9 00:24:02.724474 systemd[1]: Started cri-containerd-fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b.scope - libcontainer container fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b. Sep 9 00:24:02.748156 containerd[1637]: time="2025-09-09T00:24:02.748121369Z" level=info msg="StartContainer for \"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\" returns successfully" Sep 9 00:24:02.753996 systemd[1]: cri-containerd-fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b.scope: Deactivated successfully. Sep 9 00:24:02.754927 containerd[1637]: time="2025-09-09T00:24:02.754907828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\" id:\"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\" pid:4789 exited_at:{seconds:1757377442 nanos:754668605}" Sep 9 00:24:02.755050 containerd[1637]: time="2025-09-09T00:24:02.754988928Z" level=info msg="received exit event container_id:\"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\" id:\"fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b\" pid:4789 exited_at:{seconds:1757377442 nanos:754668605}" Sep 9 00:24:02.769224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb7c11ca1682e60092a432f7a863cbd946a4b895becb59ef5d6b05e741186b0b-rootfs.mount: Deactivated successfully. 
Sep 9 00:24:03.662615 containerd[1637]: time="2025-09-09T00:24:03.662586410Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:24:03.671202 containerd[1637]: time="2025-09-09T00:24:03.670869452Z" level=info msg="Container 988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:24:03.671118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779461548.mount: Deactivated successfully. Sep 9 00:24:03.677369 containerd[1637]: time="2025-09-09T00:24:03.677331314Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\"" Sep 9 00:24:03.677865 containerd[1637]: time="2025-09-09T00:24:03.677742608Z" level=info msg="StartContainer for \"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\"" Sep 9 00:24:03.678557 containerd[1637]: time="2025-09-09T00:24:03.678533004Z" level=info msg="connecting to shim 988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795" address="unix:///run/containerd/s/b80169813e0f3c508665916f6be4fadb429c24a6140d67788255a6ef6dc020e9" protocol=ttrpc version=3 Sep 9 00:24:03.693471 systemd[1]: Started cri-containerd-988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795.scope - libcontainer container 988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795. Sep 9 00:24:03.713841 systemd[1]: cri-containerd-988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795.scope: Deactivated successfully. Sep 9 00:24:03.714902 containerd[1637]: time="2025-09-09T00:24:03.714880249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\" id:\"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\" pid:4830 exited_at:{seconds:1757377443 nanos:714268333}" Sep 9 00:24:03.714962 containerd[1637]: time="2025-09-09T00:24:03.714949012Z" level=info msg="received exit event container_id:\"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\" id:\"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\" pid:4830 exited_at:{seconds:1757377443 nanos:714268333}" Sep 9 00:24:03.720097 containerd[1637]: time="2025-09-09T00:24:03.720077269Z" level=info msg="StartContainer for \"988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795\" returns successfully" Sep 9 00:24:03.727227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-988a61668343139286b34386328dff241d4f52ca777d7c29127ab00a3ea2c795-rootfs.mount: Deactivated successfully. Sep 9 00:24:04.663335 containerd[1637]: time="2025-09-09T00:24:04.663091511Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:24:04.672070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153378740.mount: Deactivated successfully. 
Sep 9 00:24:04.675231 containerd[1637]: time="2025-09-09T00:24:04.674581814Z" level=info msg="Container 105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:24:04.676919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313210452.mount: Deactivated successfully. Sep 9 00:24:04.684744 containerd[1637]: time="2025-09-09T00:24:04.684716932Z" level=info msg="CreateContainer within sandbox \"3926e050b00aaaa18b91a453decea05922ec3941bb5fc1ccbc24086044718518\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\"" Sep 9 00:24:04.686116 containerd[1637]: time="2025-09-09T00:24:04.686097121Z" level=info msg="StartContainer for \"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\"" Sep 9 00:24:04.687263 containerd[1637]: time="2025-09-09T00:24:04.687243525Z" level=info msg="connecting to shim 105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f" address="unix:///run/containerd/s/b80169813e0f3c508665916f6be4fadb429c24a6140d67788255a6ef6dc020e9" protocol=ttrpc version=3 Sep 9 00:24:04.708545 systemd[1]: Started cri-containerd-105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f.scope - libcontainer container 105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f. Sep 9 00:24:04.755496 containerd[1637]: time="2025-09-09T00:24:04.755462652Z" level=info msg="StartContainer for \"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\" returns successfully" Sep 9 00:24:04.844857 containerd[1637]: time="2025-09-09T00:24:04.844817715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\" id:\"a759eef601776c3084b9c927d0c4dccee35833bb42d725cbbb41e9e313a5b27c\" pid:4896 exited_at:{seconds:1757377444 nanos:844628999}" Sep 9 00:24:05.074440 kubelet[2915]: I0909 00:24:05.074406 2915 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:24:05Z","lastTransitionTime":"2025-09-09T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:24:05.352407 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 00:24:07.408222 containerd[1637]: time="2025-09-09T00:24:07.408158845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\" id:\"535ed8945db7ac1f65d47e4fddc304d316baddce889b3ba6318de1f3feb98c1f\" pid:5217 exit_status:1 exited_at:{seconds:1757377447 nanos:407783532}" Sep 9 00:24:07.412597 kubelet[2915]: E0909 00:24:07.412480 2915 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:42600->127.0.0.1:43897: read tcp 127.0.0.1:42600->127.0.0.1:43897: read: connection reset by peer Sep 9 00:24:07.412597 kubelet[2915]: E0909 00:24:07.412483 2915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42600->127.0.0.1:43897: write tcp 127.0.0.1:42600->127.0.0.1:43897: write: broken pipe Sep 9 00:24:07.801461 systemd-networkd[1518]: lxc_health: Link UP Sep 9 00:24:07.803661 systemd-networkd[1518]: lxc_health: Gained carrier Sep 9 00:24:09.241199 kubelet[2915]: I0909 00:24:09.241162 2915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2968k" podStartSLOduration=9.241151517 podStartE2EDuration="9.241151517s" podCreationTimestamp="2025-09-09 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:24:05.679668934 +0000 UTC m=+123.422457740" watchObservedRunningTime="2025-09-09 00:24:09.241151517 +0000 UTC m=+126.983940328" Sep 9 00:24:09.452863 systemd-networkd[1518]: lxc_health: Gained IPv6LL Sep 9 00:24:09.499179 containerd[1637]: time="2025-09-09T00:24:09.498854040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\" id:\"1547b8511f0681bfda021d586a119d87c7696ab381dccaa55832e1f92308a03f\" pid:5436 exited_at:{seconds:1757377449 nanos:498595580}" Sep 9 00:24:11.594603 containerd[1637]: time="2025-09-09T00:24:11.594571144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\" id:\"b1dfbc4b0373a5689960d7e420f03c76f5f7477b69098e9c2416ca3972adaae6\" pid:5469 exited_at:{seconds:1757377451 nanos:594207638}" Sep 9 00:24:13.755762 containerd[1637]: time="2025-09-09T00:24:13.755735326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105aa6d257b334660c56723076ce182e106f7c85a1c54999b075c773d48ae47f\" id:\"f5f35caa26d7e83d7be656a3df0bfb5164ce825f572da6fd81b9933259f5d7c5\" pid:5492 exited_at:{seconds:1757377453 nanos:755441173}" Sep 9 00:24:13.757811 kubelet[2915]: E0909 00:24:13.757691 2915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59944->127.0.0.1:43897: write tcp 127.0.0.1:59944->127.0.0.1:43897: write: broken pipe Sep 9 00:24:13.765402 sshd[4633]: Connection closed by 139.178.68.195 port 52268 Sep 9 00:24:13.769178 sshd-session[4627]: pam_unix(sshd:session): session closed for user core Sep 9 00:24:13.790597 systemd-logind[1603]: Session 28 logged out. Waiting for processes to exit. Sep 9 00:24:13.791542 systemd[1]: sshd@25-139.178.70.101:22-139.178.68.195:52268.service: Deactivated successfully. Sep 9 00:24:13.794604 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 00:24:13.796253 systemd-logind[1603]: Removed session 28.