Jul 9 13:07:31.707288 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 08:38:39 -00 2025
Jul 9 13:07:31.707311 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:07:31.707321 kernel: Disabled fast string operations
Jul 9 13:07:31.707327 kernel: BIOS-provided physical RAM map:
Jul 9 13:07:31.707334 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 9 13:07:31.707364 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 9 13:07:31.707374 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 9 13:07:31.707380 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 9 13:07:31.707387 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 9 13:07:31.707394 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 9 13:07:31.707401 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 9 13:07:31.707408 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 9 13:07:31.707415 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 9 13:07:31.707421 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 9 13:07:31.707431 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 9 13:07:31.707438 kernel: NX (Execute Disable) protection: active
Jul 9 13:07:31.707445 kernel: APIC: Static calls initialized
Jul 9 13:07:31.707452 kernel: SMBIOS 2.7 present.
Jul 9 13:07:31.707460 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 9 13:07:31.707466 kernel: DMI: Memory slots populated: 1/128
Jul 9 13:07:31.707473 kernel: vmware: hypercall mode: 0x00
Jul 9 13:07:31.707477 kernel: Hypervisor detected: VMware
Jul 9 13:07:31.707482 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 9 13:07:31.707487 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 9 13:07:31.707491 kernel: vmware: using clock offset of 6097326126 ns
Jul 9 13:07:31.707496 kernel: tsc: Detected 3408.000 MHz processor
Jul 9 13:07:31.707501 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 13:07:31.707506 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 13:07:31.707511 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 9 13:07:31.707516 kernel: total RAM covered: 3072M
Jul 9 13:07:31.707522 kernel: Found optimal setting for mtrr clean up
Jul 9 13:07:31.707527 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 9 13:07:31.707532 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jul 9 13:07:31.707537 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 13:07:31.707541 kernel: Using GB pages for direct mapping
Jul 9 13:07:31.707546 kernel: ACPI: Early table checksum verification disabled
Jul 9 13:07:31.707551 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 9 13:07:31.707556 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 9 13:07:31.707560 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 9 13:07:31.707567 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 9 13:07:31.707580 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 9 13:07:31.707586 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 9 13:07:31.707591 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 9 13:07:31.707596 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 9 13:07:31.707603 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 9 13:07:31.707608 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 9 13:07:31.707613 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 9 13:07:31.707618 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 9 13:07:31.707623 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 9 13:07:31.707628 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 9 13:07:31.707633 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 9 13:07:31.707638 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 9 13:07:31.707643 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 9 13:07:31.707648 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 9 13:07:31.707654 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 9 13:07:31.707659 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 9 13:07:31.707664 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 9 13:07:31.707669 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 9 13:07:31.707674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 9 13:07:31.707679 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 9 13:07:31.707684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 9 13:07:31.707689 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Jul 9 13:07:31.707694 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Jul 9 13:07:31.707700 kernel: Zone ranges:
Jul 9 13:07:31.707705 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 13:07:31.707710 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 9 13:07:31.707715 kernel: Normal empty
Jul 9 13:07:31.707720 kernel: Device empty
Jul 9 13:07:31.707725 kernel: Movable zone start for each node
Jul 9 13:07:31.707730 kernel: Early memory node ranges
Jul 9 13:07:31.707734 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 9 13:07:31.707739 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 9 13:07:31.707745 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 9 13:07:31.707751 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 9 13:07:31.707756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 13:07:31.707761 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 9 13:07:31.707766 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 9 13:07:31.707771 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 9 13:07:31.707776 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 9 13:07:31.707781 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 9 13:07:31.707786 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 9 13:07:31.707791 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 9 13:07:31.707796 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 9 13:07:31.707801 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 9 13:07:31.707806 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 9 13:07:31.707811 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 9 13:07:31.707816 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 9 13:07:31.707821 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 9 13:07:31.707826 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 9 13:07:31.707831 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 9 13:07:31.707835 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 9 13:07:31.707841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 9 13:07:31.707846 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 9 13:07:31.707851 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 9 13:07:31.707856 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 9 13:07:31.707861 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 9 13:07:31.707866 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 9 13:07:31.707871 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 9 13:07:31.707876 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 9 13:07:31.707881 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 9 13:07:31.707886 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 9 13:07:31.707892 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 9 13:07:31.707897 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 9 13:07:31.707902 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 9 13:07:31.707907 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 9 13:07:31.707911 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 9 13:07:31.707916 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 9 13:07:31.707921 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 9 13:07:31.707926 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 9 13:07:31.707931 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 9 13:07:31.707936 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 9 13:07:31.707942 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 9 13:07:31.707946 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 9 13:07:31.707951 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 9 13:07:31.707956 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 9 13:07:31.707961 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 9 13:07:31.707966 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 9 13:07:31.707972 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 9 13:07:31.707980 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 9 13:07:31.707985 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 9 13:07:31.707990 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 9 13:07:31.707996 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 9 13:07:31.708002 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 9 13:07:31.708007 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 9 13:07:31.708012 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 9 13:07:31.708017 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 9 13:07:31.708022 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 9 13:07:31.708028 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 9 13:07:31.708033 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 9 13:07:31.708039 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 9 13:07:31.708044 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 9 13:07:31.708049 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 9 13:07:31.708055 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 9 13:07:31.708060 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 9 13:07:31.708065 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 9 13:07:31.708070 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 9 13:07:31.708075 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 9 13:07:31.708080 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 9 13:07:31.708085 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 9 13:07:31.708092 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 9 13:07:31.708097 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 9 13:07:31.708103 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 9 13:07:31.708108 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 9 13:07:31.708113 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 9 13:07:31.708118 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 9 13:07:31.708123 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 9 13:07:31.708129 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 9 13:07:31.708134 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 9 13:07:31.708140 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 9 13:07:31.708145 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 9 13:07:31.708150 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 9 13:07:31.708156 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 9 13:07:31.708161 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 9 13:07:31.708166 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 9 13:07:31.708179 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 9 13:07:31.708185 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 9 13:07:31.708190 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 9 13:07:31.708195 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 9 13:07:31.708202 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 9 13:07:31.708207 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 9 13:07:31.708213 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 9 13:07:31.708218 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 9 13:07:31.708223 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 9 13:07:31.708229 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 9 13:07:31.708237 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 9 13:07:31.708245 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 9 13:07:31.708250 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 9 13:07:31.708255 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 9 13:07:31.708262 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 9 13:07:31.708267 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 9 13:07:31.708272 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 9 13:07:31.708278 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 9 13:07:31.708283 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 9 13:07:31.708288 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 9 13:07:31.708293 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 9 13:07:31.708298 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 9 13:07:31.708303 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 9 13:07:31.708309 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 9 13:07:31.708315 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 9 13:07:31.708320 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 9 13:07:31.708325 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 9 13:07:31.708331 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 9 13:07:31.708336 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 9 13:07:31.708341 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 9 13:07:31.708347 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 9 13:07:31.708352 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 9 13:07:31.708357 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 9 13:07:31.708362 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 9 13:07:31.708368 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 9 13:07:31.708374 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 9 13:07:31.708379 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 9 13:07:31.708384 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 9 13:07:31.708389 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 9 13:07:31.708394 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 9 13:07:31.708400 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 9 13:07:31.708405 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 9 13:07:31.708410 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 9 13:07:31.708415 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 9 13:07:31.708421 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 9 13:07:31.708427 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 9 13:07:31.708432 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 9 13:07:31.708437 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 9 13:07:31.708442 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 9 13:07:31.708447 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 9 13:07:31.708453 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 9 13:07:31.708458 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 9 13:07:31.708463 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 9 13:07:31.708470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 9 13:07:31.708475 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 13:07:31.708480 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 9 13:07:31.708486 kernel: TSC deadline timer available
Jul 9 13:07:31.708491 kernel: CPU topo: Max. logical packages: 128
Jul 9 13:07:31.708497 kernel: CPU topo: Max. logical dies: 128
Jul 9 13:07:31.708502 kernel: CPU topo: Max. dies per package: 1
Jul 9 13:07:31.708507 kernel: CPU topo: Max. threads per core: 1
Jul 9 13:07:31.708512 kernel: CPU topo: Num. cores per package: 1
Jul 9 13:07:31.708517 kernel: CPU topo: Num. threads per package: 1
Jul 9 13:07:31.708524 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Jul 9 13:07:31.708529 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 9 13:07:31.708535 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 9 13:07:31.708540 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 13:07:31.708545 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 9 13:07:31.708551 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jul 9 13:07:31.708556 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jul 9 13:07:31.708561 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 9 13:07:31.708566 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 9 13:07:31.708573 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 9 13:07:31.708619 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 9 13:07:31.708624 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 9 13:07:31.708629 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 9 13:07:31.708635 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 9 13:07:31.708640 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 9 13:07:31.708645 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 9 13:07:31.708650 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 9 13:07:31.708655 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 9 13:07:31.708662 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 9 13:07:31.708667 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 9 13:07:31.708672 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 9 13:07:31.708678 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 9 13:07:31.708683 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 9 13:07:31.708689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:07:31.708694 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 13:07:31.708701 kernel: random: crng init done
Jul 9 13:07:31.708706 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 9 13:07:31.708711 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 9 13:07:31.708717 kernel: printk: log_buf_len min size: 262144 bytes
Jul 9 13:07:31.708722 kernel: printk: log_buf_len: 1048576 bytes
Jul 9 13:07:31.708727 kernel: printk: early log buf free: 245592(93%)
Jul 9 13:07:31.708733 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 13:07:31.708738 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 9 13:07:31.708743 kernel: Fallback order for Node 0: 0
Jul 9 13:07:31.708748 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Jul 9 13:07:31.708755 kernel: Policy zone: DMA32
Jul 9 13:07:31.708760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 13:07:31.708765 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 9 13:07:31.708771 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 9 13:07:31.708776 kernel: ftrace: allocated 157 pages with 5 groups
Jul 9 13:07:31.708781 kernel: Dynamic Preempt: voluntary
Jul 9 13:07:31.708786 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 13:07:31.708792 kernel: rcu: RCU event tracing is enabled.
Jul 9 13:07:31.708797 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 9 13:07:31.708804 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 13:07:31.708809 kernel: Rude variant of Tasks RCU enabled.
Jul 9 13:07:31.708814 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 13:07:31.708820 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 13:07:31.708825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 9 13:07:31.708830 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 9 13:07:31.708837 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 9 13:07:31.708846 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 9 13:07:31.708854 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 9 13:07:31.708864 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 9 13:07:31.708872 kernel: Console: colour VGA+ 80x25
Jul 9 13:07:31.708881 kernel: printk: legacy console [tty0] enabled
Jul 9 13:07:31.708888 kernel: printk: legacy console [ttyS0] enabled
Jul 9 13:07:31.708893 kernel: ACPI: Core revision 20240827
Jul 9 13:07:31.708899 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 9 13:07:31.708904 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 13:07:31.708910 kernel: x2apic enabled
Jul 9 13:07:31.708915 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 13:07:31.708922 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 13:07:31.708928 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 9 13:07:31.708933 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 9 13:07:31.708939 kernel: Disabled fast string operations
Jul 9 13:07:31.708944 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 9 13:07:31.708949 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 9 13:07:31.708955 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 13:07:31.708960 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jul 9 13:07:31.708965 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 9 13:07:31.708971 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 9 13:07:31.708977 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 9 13:07:31.708982 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 9 13:07:31.708987 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 9 13:07:31.708993 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 9 13:07:31.708998 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 9 13:07:31.709004 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 9 13:07:31.709009 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 9 13:07:31.709014 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 9 13:07:31.709021 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 9 13:07:31.709026 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 9 13:07:31.709032 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 9 13:07:31.709037 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 9 13:07:31.709042 kernel: Freeing SMP alternatives memory: 32K
Jul 9 13:07:31.709048 kernel: pid_max: default: 131072 minimum: 1024
Jul 9 13:07:31.709053 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 13:07:31.709058 kernel: landlock: Up and running.
Jul 9 13:07:31.709064 kernel: SELinux: Initializing.
Jul 9 13:07:31.709070 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 9 13:07:31.709076 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 9 13:07:31.709081 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 9 13:07:31.709086 kernel: Performance Events: Skylake events, core PMU driver.
Jul 9 13:07:31.709092 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 9 13:07:31.709097 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 9 13:07:31.709102 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 9 13:07:31.709107 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 9 13:07:31.709114 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 9 13:07:31.709119 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 9 13:07:31.709124 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 9 13:07:31.709129 kernel: ... version: 1
Jul 9 13:07:31.709135 kernel: ... bit width: 48
Jul 9 13:07:31.709140 kernel: ... generic registers: 4
Jul 9 13:07:31.709145 kernel: ... value mask: 0000ffffffffffff
Jul 9 13:07:31.709153 kernel: ... max period: 000000007fffffff
Jul 9 13:07:31.709162 kernel: ... fixed-purpose events: 0
Jul 9 13:07:31.709172 kernel: ... event mask: 000000000000000f
Jul 9 13:07:31.709178 kernel: signal: max sigframe size: 1776
Jul 9 13:07:31.709183 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 13:07:31.709189 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 13:07:31.709194 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Jul 9 13:07:31.709200 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 9 13:07:31.709205 kernel: smp: Bringing up secondary CPUs ...
Jul 9 13:07:31.709210 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 13:07:31.709216 kernel: .... node #0, CPUs: #1
Jul 9 13:07:31.709221 kernel: Disabled fast string operations
Jul 9 13:07:31.709227 kernel: smp: Brought up 1 node, 2 CPUs
Jul 9 13:07:31.709235 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jul 9 13:07:31.709245 kernel: Memory: 1924264K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54568K init, 2400K bss, 160980K reserved, 0K cma-reserved)
Jul 9 13:07:31.709252 kernel: devtmpfs: initialized
Jul 9 13:07:31.709257 kernel: x86/mm: Memory block size: 128MB
Jul 9 13:07:31.709263 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jul 9 13:07:31.709268 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 13:07:31.709273 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jul 9 13:07:31.709279 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 13:07:31.709285 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 13:07:31.709291 kernel: audit: initializing netlink subsys (disabled)
Jul 9 13:07:31.709296 kernel: audit: type=2000 audit(1752066448.292:1): state=initialized audit_enabled=0 res=1
Jul 9 13:07:31.709302 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 13:07:31.709307 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 9 13:07:31.709312 kernel: cpuidle: using governor menu
Jul 9 13:07:31.709318 kernel: Simple Boot Flag at 0x36 set to 0x80
Jul 9 13:07:31.709323 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 13:07:31.709328 kernel: dca service started, version 1.12.1
Jul 9 13:07:31.709335 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Jul 9 13:07:31.709346 kernel: PCI: Using configuration type 1 for base access
Jul 9 13:07:31.709353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 9 13:07:31.709359 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 13:07:31.709364 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 13:07:31.709370 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 13:07:31.709375 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 13:07:31.709381 kernel: ACPI: Added _OSI(Module Device)
Jul 9 13:07:31.709388 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 13:07:31.709393 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 13:07:31.709399 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 13:07:31.709404 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jul 9 13:07:31.709410 kernel: ACPI: Interpreter enabled
Jul 9 13:07:31.709416 kernel: ACPI: PM: (supports S0 S1 S5)
Jul 9 13:07:31.709421 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 9 13:07:31.709427 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 9 13:07:31.709433 kernel: PCI: Using E820 reservations for host bridge windows
Jul 9 13:07:31.709439 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jul 9 13:07:31.709445 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jul 9 13:07:31.709540 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 13:07:31.709609 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jul 9 13:07:31.709660 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jul 9 13:07:31.709669 kernel: PCI host bridge to bus 0000:00
Jul 9 13:07:31.709720 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 9 13:07:31.709767 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jul 9 13:07:31.709810 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 9 13:07:31.709853 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 9 13:07:31.709895 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Jul 9 13:07:31.709947 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jul 9 13:07:31.710020 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Jul 9 13:07:31.710079 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Jul 9 13:07:31.710131 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 9 13:07:31.710206 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Jul 9 13:07:31.710286 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Jul 9 13:07:31.710339 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Jul 9 13:07:31.710396 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 9 13:07:31.710453 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 9 13:07:31.710513 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 9 13:07:31.710567 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 9 13:07:31.710652 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 9 13:07:31.710724 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Jul 9 13:07:31.710786 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jul 9 13:07:31.710840 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Jul 9 13:07:31.710890 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Jul 9 13:07:31.710939 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Jul 9 13:07:31.710991 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Jul 9 13:07:31.711040 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Jul 9 13:07:31.711091 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Jul 9 13:07:31.711139 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Jul 9 13:07:31.711190 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Jul 9 13:07:31.711245 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 9 13:07:31.711302 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Jul 9 13:07:31.711351 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jul 9 13:07:31.711398 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jul 9 13:07:31.711449 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jul 9 13:07:31.711496 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jul 9 13:07:31.711551 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 9 13:07:31.711854 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jul 9 13:07:31.711925 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jul 9 13:07:31.711976 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jul 9 13:07:31.712025 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jul 9 13:07:31.712079 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 9 13:07:31.712145 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jul 9 13:07:31.712195 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jul 9 13:07:31.712253 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jul 9 13:07:31.712311 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jul 9 13:07:31.712368 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jul 9 13:07:31.712422 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 9 13:07:31.712494 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jul 9 13:07:31.712545 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jul 9 13:07:31.712620 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jul 9 13:07:31.712670 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jul 9 13:07:31.712718 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jul 9 13:07:31.712772 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 9 13:07:31.712825 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jul 9 13:07:31.712874 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jul 9 13:07:31.712922 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jul 9 13:07:31.712970 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jul 9 13:07:31.713028 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 9 13:07:31.713078 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jul 9 13:07:31.713127 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jul 9 13:07:31.713189 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jul 9 13:07:31.713246 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jul 9 13:07:31.713305 kernel:
pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.713355 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 9 13:07:31.713404 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 9 13:07:31.713453 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 9 13:07:31.713501 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.713554 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.713648 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 9 13:07:31.713741 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 9 13:07:31.713797 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 9 13:07:31.713851 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.713904 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.713958 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 9 13:07:31.714012 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 9 13:07:31.714064 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 9 13:07:31.714112 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.714165 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.714260 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 9 13:07:31.714309 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 9 13:07:31.714357 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 9 13:07:31.714406 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.714465 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.714527 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 9 13:07:31.714588 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 9 13:07:31.714643 
kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 9 13:07:31.714692 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 9 13:07:31.714742 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.714796 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.714848 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 9 13:07:31.714898 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 9 13:07:31.714947 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 9 13:07:31.714995 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 9 13:07:31.715043 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.715102 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.715161 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 9 13:07:31.715226 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 9 13:07:31.715281 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 9 13:07:31.715331 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.715388 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.716798 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 9 13:07:31.716861 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 9 13:07:31.716915 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 9 13:07:31.716971 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.717029 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.717082 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 9 13:07:31.717134 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 9 13:07:31.717207 kernel: pci 0000:00:16.5: bridge window [mem 
0xe6700000-0xe67fffff 64bit pref] Jul 9 13:07:31.717272 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.717327 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.717382 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 9 13:07:31.717432 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 9 13:07:31.717483 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 9 13:07:31.717533 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.719614 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.719676 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 9 13:07:31.719730 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 9 13:07:31.719781 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 9 13:07:31.719834 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.719889 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.719948 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 9 13:07:31.720001 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 9 13:07:31.720050 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 9 13:07:31.720099 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 9 13:07:31.720150 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.720242 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.720307 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 9 13:07:31.720355 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 9 13:07:31.720406 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 9 13:07:31.720472 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 9 13:07:31.720521 
kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.720708 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.720768 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 9 13:07:31.720819 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 9 13:07:31.720887 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 9 13:07:31.720938 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 9 13:07:31.720992 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.721048 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.721100 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 9 13:07:31.721150 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 9 13:07:31.721205 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 9 13:07:31.721255 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.721312 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.721367 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 9 13:07:31.721417 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 9 13:07:31.721468 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 9 13:07:31.721518 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.721573 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.721636 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 9 13:07:31.721687 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 9 13:07:31.721740 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 9 13:07:31.721790 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.721845 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 
0x060400 PCIe Root Port Jul 9 13:07:31.721897 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 9 13:07:31.721947 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 9 13:07:31.721996 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 9 13:07:31.722047 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.722104 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.722155 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 9 13:07:31.722205 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 9 13:07:31.722255 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 9 13:07:31.722306 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.722362 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.722413 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 9 13:07:31.722466 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 9 13:07:31.722516 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 9 13:07:31.722565 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 9 13:07:31.724674 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.724736 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.724791 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 9 13:07:31.724843 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 9 13:07:31.724898 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 9 13:07:31.725441 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 9 13:07:31.725496 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.725554 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.726986 kernel: pci 
0000:00:18.2: PCI bridge to [bus 1d] Jul 9 13:07:31.727044 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 9 13:07:31.727096 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 9 13:07:31.727151 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.727208 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.727260 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 9 13:07:31.727310 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 9 13:07:31.727360 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 9 13:07:31.727412 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.727467 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.727520 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 9 13:07:31.727571 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 9 13:07:31.727640 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 9 13:07:31.727691 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.727749 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.727800 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 9 13:07:31.727851 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 9 13:07:31.727904 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 9 13:07:31.727953 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.728009 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.728060 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 9 13:07:31.728110 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 9 13:07:31.728160 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit 
pref] Jul 9 13:07:31.728222 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.728279 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 9 13:07:31.728333 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 9 13:07:31.728383 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 9 13:07:31.728433 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 9 13:07:31.728483 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.728537 kernel: pci_bus 0000:01: extended config space not accessible Jul 9 13:07:31.728624 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 9 13:07:31.728680 kernel: pci_bus 0000:02: extended config space not accessible Jul 9 13:07:31.728692 kernel: acpiphp: Slot [32] registered Jul 9 13:07:31.728698 kernel: acpiphp: Slot [33] registered Jul 9 13:07:31.728704 kernel: acpiphp: Slot [34] registered Jul 9 13:07:31.728710 kernel: acpiphp: Slot [35] registered Jul 9 13:07:31.728716 kernel: acpiphp: Slot [36] registered Jul 9 13:07:31.728722 kernel: acpiphp: Slot [37] registered Jul 9 13:07:31.728728 kernel: acpiphp: Slot [38] registered Jul 9 13:07:31.728733 kernel: acpiphp: Slot [39] registered Jul 9 13:07:31.728739 kernel: acpiphp: Slot [40] registered Jul 9 13:07:31.728746 kernel: acpiphp: Slot [41] registered Jul 9 13:07:31.728752 kernel: acpiphp: Slot [42] registered Jul 9 13:07:31.728758 kernel: acpiphp: Slot [43] registered Jul 9 13:07:31.728764 kernel: acpiphp: Slot [44] registered Jul 9 13:07:31.728770 kernel: acpiphp: Slot [45] registered Jul 9 13:07:31.728776 kernel: acpiphp: Slot [46] registered Jul 9 13:07:31.728781 kernel: acpiphp: Slot [47] registered Jul 9 13:07:31.728787 kernel: acpiphp: Slot [48] registered Jul 9 13:07:31.728793 kernel: acpiphp: Slot [49] registered Jul 9 13:07:31.728799 kernel: acpiphp: Slot [50] registered Jul 9 13:07:31.728806 kernel: acpiphp: Slot [51] registered Jul 9 13:07:31.728812 
kernel: acpiphp: Slot [52] registered Jul 9 13:07:31.728817 kernel: acpiphp: Slot [53] registered Jul 9 13:07:31.728823 kernel: acpiphp: Slot [54] registered Jul 9 13:07:31.728829 kernel: acpiphp: Slot [55] registered Jul 9 13:07:31.728835 kernel: acpiphp: Slot [56] registered Jul 9 13:07:31.728841 kernel: acpiphp: Slot [57] registered Jul 9 13:07:31.728846 kernel: acpiphp: Slot [58] registered Jul 9 13:07:31.728852 kernel: acpiphp: Slot [59] registered Jul 9 13:07:31.728859 kernel: acpiphp: Slot [60] registered Jul 9 13:07:31.728865 kernel: acpiphp: Slot [61] registered Jul 9 13:07:31.728870 kernel: acpiphp: Slot [62] registered Jul 9 13:07:31.728876 kernel: acpiphp: Slot [63] registered Jul 9 13:07:31.728927 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 9 13:07:31.728981 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 9 13:07:31.729031 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 9 13:07:31.729081 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 9 13:07:31.729133 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 9 13:07:31.729183 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 9 13:07:31.729240 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jul 9 13:07:31.729292 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jul 9 13:07:31.729345 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 9 13:07:31.729395 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 9 13:07:31.729447 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 9 13:07:31.729498 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 9 13:07:31.729551 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 9 13:07:31.730722 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 9 13:07:31.730780 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 9 13:07:31.730834 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 9 13:07:31.730887 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 9 13:07:31.730941 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 9 13:07:31.730994 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 9 13:07:31.731050 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 9 13:07:31.731109 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jul 9 13:07:31.731162 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jul 9 13:07:31.731229 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jul 9 13:07:31.731281 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jul 9 13:07:31.731331 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jul 9 13:07:31.731381 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 9 13:07:31.731435 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 9 13:07:31.731485 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 9 13:07:31.731537 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 9 13:07:31.731632 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 9 13:07:31.731686 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 9 13:07:31.731739 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 9 13:07:31.731790 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 9 13:07:31.731841 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 9 13:07:31.731895 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 9 13:07:31.731947 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 9 13:07:31.731998 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 9 13:07:31.732049 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 9 13:07:31.732100 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 9 13:07:31.732151 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 9 13:07:31.732232 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 9 13:07:31.732286 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 9 13:07:31.732350 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 9 13:07:31.732399 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 9 13:07:31.732448 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 9 13:07:31.732497 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 9 13:07:31.732546 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 9 13:07:31.732614 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 9 13:07:31.732669 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 9 13:07:31.732718 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 9 13:07:31.732768 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 9 13:07:31.732817 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 9 13:07:31.732867 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 9 13:07:31.732875 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 9 13:07:31.732881 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 9 13:07:31.732887 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 9 
13:07:31.732894 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 9 13:07:31.732900 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 9 13:07:31.732906 kernel: iommu: Default domain type: Translated Jul 9 13:07:31.732912 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 9 13:07:31.732918 kernel: PCI: Using ACPI for IRQ routing Jul 9 13:07:31.732923 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 9 13:07:31.732929 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 9 13:07:31.732935 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 9 13:07:31.732983 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 9 13:07:31.733034 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 9 13:07:31.733083 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 9 13:07:31.733091 kernel: vgaarb: loaded Jul 9 13:07:31.733097 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 9 13:07:31.733103 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 9 13:07:31.733108 kernel: clocksource: Switched to clocksource tsc-early Jul 9 13:07:31.733114 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 13:07:31.733120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 13:07:31.733125 kernel: pnp: PnP ACPI init Jul 9 13:07:31.733184 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 9 13:07:31.733231 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 9 13:07:31.733275 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 9 13:07:31.733325 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 9 13:07:31.733373 kernel: pnp 00:06: [dma 2] Jul 9 13:07:31.733438 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 9 13:07:31.733500 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 9 13:07:31.733557 kernel: system 00:07: 
[mem 0xfe800000-0xfe9fffff] has been reserved Jul 9 13:07:31.733568 kernel: pnp: PnP ACPI: found 8 devices Jul 9 13:07:31.734597 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 9 13:07:31.734622 kernel: NET: Registered PF_INET protocol family Jul 9 13:07:31.734629 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 13:07:31.734634 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 9 13:07:31.734641 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 13:07:31.734649 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 9 13:07:31.734654 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 9 13:07:31.734660 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 9 13:07:31.734666 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 9 13:07:31.734672 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 9 13:07:31.734677 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 13:07:31.734683 kernel: NET: Registered PF_XDP protocol family Jul 9 13:07:31.734747 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 9 13:07:31.734803 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 9 13:07:31.734858 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 9 13:07:31.734923 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 9 13:07:31.734981 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 9 13:07:31.735032 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 9 13:07:31.735093 kernel: pci 0000:00:16.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 9 13:07:31.735145 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 9 13:07:31.735196 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 9 13:07:31.735264 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 9 13:07:31.735313 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 9 13:07:31.735362 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 9 13:07:31.735412 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 9 13:07:31.735462 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 9 13:07:31.735510 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 9 13:07:31.735560 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 9 13:07:31.736850 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 9 13:07:31.736906 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 9 13:07:31.736957 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 9 13:07:31.737006 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 9 13:07:31.737056 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 9 13:07:31.737105 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 9 13:07:31.737154 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 9 13:07:31.737203 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jul 9 13:07:31.737433 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit 
pref]: assigned Jul 9 13:07:31.738635 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.738709 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.738762 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.738831 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.738882 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.738932 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.738984 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739034 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.739088 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739137 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.739187 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739237 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.739291 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739349 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.739400 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739457 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.739511 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739563 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 9 13:07:31.739625 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 9 13:07:31.739675 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: 
failed to assign
Jul 9 13:07:31.739725 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.739775 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.739824 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.739877 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.739926 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.739975 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.740025 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.740074 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.740123 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.740173 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.740254 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.740305 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.740355 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.740404 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.740458 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.740524 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.741166 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.741238 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.741292 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.741347 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.741398 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.741448 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.741498 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.741547 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.741609 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.741671 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.741732 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.741783 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742021 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.742395 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742449 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.742501 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742553 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.742618 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742670 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.742720 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742770 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.742823 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742873 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.742923 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.742972 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743021 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743071 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743121 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743171 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743221 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743270 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743323 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743374 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743425 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743474 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743524 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743739 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.743799 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.743853 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.744619 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.744685 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.744740 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.744793 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.744844 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.744894 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.744945 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.744999 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Jul 9 13:07:31.745050 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Jul 9 13:07:31.745103 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 9 13:07:31.745154 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Jul 9 13:07:31.745203 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jul 9 13:07:31.745253 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jul 9 13:07:31.745302 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jul 9 13:07:31.745356 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned
Jul 9 13:07:31.745409 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jul 9 13:07:31.745459 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jul 9 13:07:31.745508 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jul 9 13:07:31.745558 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Jul 9 13:07:31.745617 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jul 9 13:07:31.745667 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jul 9 13:07:31.745717 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jul 9 13:07:31.745766 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jul 9 13:07:31.745817 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jul 9 13:07:31.745866 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jul 9 13:07:31.745919 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jul 9 13:07:31.745968 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jul 9 13:07:31.746017 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jul 9 13:07:31.746066 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jul 9 13:07:31.746117 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jul 9 13:07:31.746167 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jul 9 13:07:31.746236 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jul 9 13:07:31.746294 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jul 9 13:07:31.746359 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jul 9 13:07:31.746413 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jul 9 13:07:31.746462 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jul 9 13:07:31.746511 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jul 9 13:07:31.746562 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jul 9 13:07:31.746619 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jul 9 13:07:31.746716 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jul 9 13:07:31.746771 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jul 9 13:07:31.746821 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jul 9 13:07:31.746875 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned
Jul 9 13:07:31.746926 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jul 9 13:07:31.746975 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jul 9 13:07:31.747025 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jul 9 13:07:31.747074 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Jul 9 13:07:31.747125 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jul 9 13:07:31.747175 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jul 9 13:07:31.747226 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jul 9 13:07:31.747276 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jul 9 13:07:31.747327 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jul 9 13:07:31.747377 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jul 9 13:07:31.747426 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jul 9 13:07:31.747475 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jul 9 13:07:31.747524 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jul 9 13:07:31.747583 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jul 9 13:07:31.747635 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jul 9 13:07:31.747712 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jul 9 13:07:31.749266 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jul 9 13:07:31.749329 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jul 9 13:07:31.749384 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jul 9 13:07:31.749437 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jul 9 13:07:31.749488 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jul 9 13:07:31.749540 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jul 9 13:07:31.749610 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jul 9 13:07:31.749662 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jul 9 13:07:31.749714 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jul 9 13:07:31.749764 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jul 9 13:07:31.749815 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jul 9 13:07:31.749868 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jul 9 13:07:31.749919 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jul 9 13:07:31.749970 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jul 9 13:07:31.750024 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jul 9 13:07:31.750075 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jul 9 13:07:31.750126 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jul 9 13:07:31.750175 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jul 9 13:07:31.750226 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jul 9 13:07:31.750277 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jul 9 13:07:31.750327 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jul 9 13:07:31.750378 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jul 9 13:07:31.750430 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jul 9 13:07:31.750482 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jul 9 13:07:31.750533 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jul 9 13:07:31.750601 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jul 9 13:07:31.750655 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jul 9 13:07:31.750705 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jul 9 13:07:31.750759 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jul 9 13:07:31.750809 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jul 9 13:07:31.750869 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jul 9 13:07:31.750922 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jul 9 13:07:31.750972 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jul 9 13:07:31.751022 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jul 9 13:07:31.751071 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jul 9 13:07:31.751124 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jul 9 13:07:31.751177 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jul 9 13:07:31.751228 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jul 9 13:07:31.751281 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jul 9 13:07:31.751331 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jul 9 13:07:31.751380 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jul 9 13:07:31.751431 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jul 9 13:07:31.751483 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jul 9 13:07:31.751533 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jul 9 13:07:31.752065 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jul 9 13:07:31.752129 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jul 9 13:07:31.752264 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jul 9 13:07:31.752322 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jul 9 13:07:31.752375 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jul 9 13:07:31.752428 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jul 9 13:07:31.752479 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jul 9 13:07:31.752530 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jul 9 13:07:31.752593 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jul 9 13:07:31.753613 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jul 9 13:07:31.753675 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jul 9 13:07:31.753733 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jul 9 13:07:31.753785 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jul 9 13:07:31.753836 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jul 9 13:07:31.753896 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jul 9 13:07:31.753948 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jul 9 13:07:31.753999 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jul 9 13:07:31.754051 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jul 9 13:07:31.754105 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jul 9 13:07:31.754156 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jul 9 13:07:31.754210 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Jul 9 13:07:31.754255 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Jul 9 13:07:31.754300 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Jul 9 13:07:31.754345 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Jul 9 13:07:31.754388 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Jul 9 13:07:31.754438 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Jul 9 13:07:31.754484 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Jul 9 13:07:31.754529 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jul 9 13:07:31.754581 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Jul 9 13:07:31.754629 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Jul 9 13:07:31.754676 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Jul 9 13:07:31.754723 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Jul 9 13:07:31.754772 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Jul 9 13:07:31.754823 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Jul 9 13:07:31.754870 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Jul 9 13:07:31.754916 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Jul 9 13:07:31.754965 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Jul 9 13:07:31.755012 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Jul 9 13:07:31.755057 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Jul 9 13:07:31.755109 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Jul 9 13:07:31.755155 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Jul 9 13:07:31.755200 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Jul 9 13:07:31.755249 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Jul 9 13:07:31.755295 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Jul 9 13:07:31.755346 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Jul 9 13:07:31.755392 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jul 9 13:07:31.755445 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Jul 9 13:07:31.755491 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Jul 9 13:07:31.755540 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Jul 9 13:07:31.755800 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Jul 9 13:07:31.755855 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Jul 9 13:07:31.755902 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Jul 9 13:07:31.755957 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Jul 9 13:07:31.756003 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Jul 9 13:07:31.756065 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Jul 9 13:07:31.756269 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Jul 9 13:07:31.756319 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Jul 9 13:07:31.756367 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Jul 9 13:07:31.756427 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Jul 9 13:07:31.756483 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Jul 9 13:07:31.756688 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Jul 9 13:07:31.756743 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Jul 9 13:07:31.756808 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jul 9 13:07:31.757149 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Jul 9 13:07:31.757204 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jul 9 13:07:31.757256 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Jul 9 13:07:31.757304 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Jul 9 13:07:31.757369 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Jul 9 13:07:31.757544 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Jul 9 13:07:31.757609 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Jul 9 13:07:31.757660 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jul 9 13:07:31.757711 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Jul 9 13:07:31.757757 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Jul 9 13:07:31.757802 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Jul 9 13:07:31.758017 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Jul 9 13:07:31.758068 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Jul 9 13:07:31.758115 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Jul 9 13:07:31.758167 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Jul 9 13:07:31.758227 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Jul 9 13:07:31.758273 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Jul 9 13:07:31.758322 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Jul 9 13:07:31.758367 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Jul 9 13:07:31.758418 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Jul 9 13:07:31.758465 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Jul 9 13:07:31.758516 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Jul 9 13:07:31.758562 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Jul 9 13:07:31.758635 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Jul 9 13:07:31.758680 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Jul 9 13:07:31.758728 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Jul 9 13:07:31.758772 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Jul 9 13:07:31.758822 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Jul 9 13:07:31.758867 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Jul 9 13:07:31.758911 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Jul 9 13:07:31.758959 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Jul 9 13:07:31.759003 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Jul 9 13:07:31.759047 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Jul 9 13:07:31.759099 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Jul 9 13:07:31.759143 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Jul 9 13:07:31.759192 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Jul 9 13:07:31.759238 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jul 9 13:07:31.759287 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Jul 9 13:07:31.759332 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Jul 9 13:07:31.759381 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Jul 9 13:07:31.759447 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Jul 9 13:07:31.759512 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Jul 9 13:07:31.759556 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Jul 9 13:07:31.759620 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Jul 9 13:07:31.759668 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jul 9 13:07:31.759724 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 9 13:07:31.759735 kernel: PCI: CLS 32 bytes, default 64
Jul 9 13:07:31.759742 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 9 13:07:31.759748 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 9 13:07:31.759754 kernel: clocksource: Switched to clocksource tsc
Jul 9 13:07:31.759759 kernel: Initialise system trusted keyrings
Jul 9 13:07:31.759765 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 9 13:07:31.759771 kernel: Key type asymmetric registered
Jul 9 13:07:31.759777 kernel: Asymmetric key parser 'x509' registered
Jul 9 13:07:31.759783 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 9 13:07:31.759790 kernel: io scheduler mq-deadline registered
Jul 9 13:07:31.759795 kernel: io scheduler kyber registered
Jul 9 13:07:31.759801 kernel: io scheduler bfq registered
Jul 9 13:07:31.759853 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Jul 9 13:07:31.759904 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.759955 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Jul 9 13:07:31.760007 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760060 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Jul 9 13:07:31.760110 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760161 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Jul 9 13:07:31.760249 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760311 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Jul 9 13:07:31.760361 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760410 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Jul 9 13:07:31.760463 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760530 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Jul 9 13:07:31.760608 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760661 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Jul 9 13:07:31.760711 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760760 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Jul 9 13:07:31.760810 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760862 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Jul 9 13:07:31.760910 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.760960 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Jul 9 13:07:31.761009 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761057 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Jul 9 13:07:31.761107 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761164 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Jul 9 13:07:31.761239 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761292 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Jul 9 13:07:31.761344 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761394 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Jul 9 13:07:31.761446 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761498 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Jul 9 13:07:31.761548 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761614 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Jul 9 13:07:31.761669 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761721 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Jul 9 13:07:31.761772 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761823 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Jul 9 13:07:31.761874 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.761925 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Jul 9 13:07:31.761976 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762028 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Jul 9 13:07:31.762081 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762132 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Jul 9 13:07:31.762183 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762235 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Jul 9 13:07:31.762285 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762336 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Jul 9 13:07:31.762386 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762439 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Jul 9 13:07:31.762489 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762540 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Jul 9 13:07:31.762606 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762658 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Jul 9 13:07:31.762709 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762760 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Jul 9 13:07:31.762811 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762864 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Jul 9 13:07:31.762915 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.762965 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Jul 9 13:07:31.763030 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.763080 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Jul 9 13:07:31.763130 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.763179 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Jul 9 13:07:31.763230 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Jul 9 13:07:31.763239 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 9 13:07:31.763247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 13:07:31.763253 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 9 13:07:31.763259 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Jul 9 13:07:31.763266 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 9 13:07:31.763272 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 9 13:07:31.763321 kernel: rtc_cmos 00:01: registered as rtc0
Jul 9 13:07:31.763369 kernel: rtc_cmos 00:01: setting system clock to 2025-07-09T13:07:31 UTC (1752066451)
Jul 9 13:07:31.763378 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 9 13:07:31.763421 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Jul 9 13:07:31.763429 kernel: intel_pstate: CPU model not supported
Jul 9 13:07:31.763436 kernel: NET: Registered PF_INET6 protocol family
Jul 9 13:07:31.763442 kernel: Segment Routing with IPv6
Jul 9 13:07:31.763448 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 13:07:31.763454 kernel: NET: Registered PF_PACKET protocol family
Jul 9 13:07:31.763462 kernel: Key type dns_resolver registered
Jul 9 13:07:31.763468 kernel: IPI shorthand broadcast: enabled
Jul 9 13:07:31.763474 kernel: sched_clock: Marking stable (2725070338, 173193577)->(2913035519, -14771604)
Jul 9 13:07:31.763480 kernel: registered taskstats version 1
Jul 9 13:07:31.763486 kernel: Loading compiled-in X.509 certificates
Jul 9 13:07:31.763492 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 8ba3d283fde4a005aa35ab9394afe8122b8a3878'
Jul 9 13:07:31.763498 kernel: Demotion targets for Node 0: null
Jul 9 13:07:31.763504 kernel: Key type .fscrypt registered
Jul 9 13:07:31.763510 kernel: Key type fscrypt-provisioning registered
Jul 9 13:07:31.763517 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 13:07:31.763523 kernel: ima: Allocated hash algorithm: sha1
Jul 9 13:07:31.763529 kernel: ima: No architecture policies found
Jul 9 13:07:31.763535 kernel: clk: Disabling unused clocks
Jul 9 13:07:31.763541 kernel: Warning: unable to open an initial console.
Jul 9 13:07:31.763547 kernel: Freeing unused kernel image (initmem) memory: 54568K
Jul 9 13:07:31.763553 kernel: Write protecting the kernel read-only data: 24576k
Jul 9 13:07:31.763559 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 9 13:07:31.763566 kernel: Run /init as init process
Jul 9 13:07:31.763573 kernel: with arguments:
Jul 9 13:07:31.763589 kernel: /init
Jul 9 13:07:31.763595 kernel: with environment:
Jul 9 13:07:31.763601 kernel: HOME=/
Jul 9 13:07:31.763607 kernel: TERM=linux
Jul 9 13:07:31.763613 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 13:07:31.763620 systemd[1]: Successfully made /usr/ read-only.
Jul 9 13:07:31.763628 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 13:07:31.763637 systemd[1]: Detected virtualization vmware.
Jul 9 13:07:31.763643 systemd[1]: Detected architecture x86-64.
Jul 9 13:07:31.763649 systemd[1]: Running in initrd.
Jul 9 13:07:31.763655 systemd[1]: No hostname configured, using default hostname.
Jul 9 13:07:31.763661 systemd[1]: Hostname set to .
Jul 9 13:07:31.763667 systemd[1]: Initializing machine ID from random generator.
Jul 9 13:07:31.763673 systemd[1]: Queued start job for default target initrd.target.
Jul 9 13:07:31.763680 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:07:31.763687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:07:31.763694 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 13:07:31.763700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 13:07:31.763706 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 13:07:31.763713 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 13:07:31.763720 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 13:07:31.763728 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 13:07:31.763734 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:07:31.763740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:07:31.763747 systemd[1]: Reached target paths.target - Path Units.
Jul 9 13:07:31.763753 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 13:07:31.763759 systemd[1]: Reached target swap.target - Swaps.
Jul 9 13:07:31.763765 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 13:07:31.763771 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 13:07:31.763777 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 13:07:31.763784 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 13:07:31.763791 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 13:07:31.763797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:07:31.763803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:07:31.763809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:07:31.763815 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 13:07:31.763822 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 13:07:31.763828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 13:07:31.763835 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 9 13:07:31.763841 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 9 13:07:31.763848 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 13:07:31.763854 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 13:07:31.763860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 13:07:31.763866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 13:07:31.763873 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 13:07:31.763880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 13:07:31.763886 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 13:07:31.763893 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 13:07:31.763912 systemd-journald[244]: Collecting audit messages is disabled. Jul 9 13:07:31.763929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 13:07:31.763936 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 13:07:31.763943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 13:07:31.763949 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 13:07:31.763956 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 9 13:07:31.763962 kernel: Bridge firewalling registered Jul 9 13:07:31.763969 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 13:07:31.763976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 13:07:31.763982 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 13:07:31.763988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 13:07:31.763995 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 13:07:31.764001 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 13:07:31.764009 systemd-journald[244]: Journal started Jul 9 13:07:31.764023 systemd-journald[244]: Runtime Journal (/run/log/journal/38d2c7cecdc2428fadea597bb32907e0) is 4.8M, max 38.8M, 34M free. Jul 9 13:07:31.713982 systemd-modules-load[245]: Inserted module 'overlay' Jul 9 13:07:31.743645 systemd-modules-load[245]: Inserted module 'br_netfilter' Jul 9 13:07:31.770592 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 13:07:31.773806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 13:07:31.777713 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d Jul 9 13:07:31.783544 systemd-tmpfiles[285]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 9 13:07:31.786159 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 9 13:07:31.787137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 13:07:31.817770 systemd-resolved[312]: Positive Trust Anchors: Jul 9 13:07:31.817779 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 13:07:31.817801 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 13:07:31.819965 systemd-resolved[312]: Defaulting to hostname 'linux'. Jul 9 13:07:31.820782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 13:07:31.820945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 13:07:31.836594 kernel: SCSI subsystem initialized Jul 9 13:07:31.853589 kernel: Loading iSCSI transport class v2.0-870. Jul 9 13:07:31.861588 kernel: iscsi: registered transport (tcp) Jul 9 13:07:31.882604 kernel: iscsi: registered transport (qla4xxx) Jul 9 13:07:31.882629 kernel: QLogic iSCSI HBA Driver Jul 9 13:07:31.892843 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 13:07:31.900055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 13:07:31.901165 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 13:07:31.922688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 13:07:31.923610 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 9 13:07:31.959591 kernel: raid6: avx2x4 gen() 47430 MB/s Jul 9 13:07:31.976587 kernel: raid6: avx2x2 gen() 53503 MB/s Jul 9 13:07:31.993714 kernel: raid6: avx2x1 gen() 44767 MB/s Jul 9 13:07:31.993728 kernel: raid6: using algorithm avx2x2 gen() 53503 MB/s Jul 9 13:07:32.011776 kernel: raid6: .... xor() 32324 MB/s, rmw enabled Jul 9 13:07:32.011798 kernel: raid6: using avx2x2 recovery algorithm Jul 9 13:07:32.025589 kernel: xor: automatically using best checksumming function avx Jul 9 13:07:32.129593 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 13:07:32.133390 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 13:07:32.134421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 13:07:32.150207 systemd-udevd[492]: Using default interface naming scheme 'v255'. Jul 9 13:07:32.153563 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 13:07:32.154649 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 13:07:32.174957 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jul 9 13:07:32.187983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 13:07:32.188831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 13:07:32.263729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 13:07:32.265822 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 9 13:07:32.337589 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 9 13:07:32.337626 kernel: vmw_pvscsi: using 64bit dma Jul 9 13:07:32.337635 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jul 9 13:07:32.337643 kernel: vmw_pvscsi: max_id: 16 Jul 9 13:07:32.339584 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 9 13:07:32.345812 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 9 13:07:32.345833 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 9 13:07:32.345931 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 9 13:07:32.346597 kernel: vmw_pvscsi: using MSI-X Jul 9 13:07:32.351461 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 9 13:07:32.351572 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 9 13:07:32.351681 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 9 13:07:32.355621 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 9 13:07:32.368594 kernel: libata version 3.00 loaded. Jul 9 13:07:32.372601 kernel: cryptd: max_cpu_qlen set to 1000 Jul 9 13:07:32.375521 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 9 13:07:32.376784 (udev-worker)[549]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 9 13:07:32.379309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 13:07:32.379345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 13:07:32.380884 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 13:07:32.382414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 9 13:07:32.387658 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 9 13:07:32.387786 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 9 13:07:32.387854 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 9 13:07:32.387928 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 9 13:07:32.389583 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 9 13:07:32.389661 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 9 13:07:32.395584 kernel: AES CTR mode by8 optimization enabled Jul 9 13:07:32.401658 kernel: scsi host1: ata_piix Jul 9 13:07:32.406608 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jul 9 13:07:32.406631 kernel: scsi host2: ata_piix Jul 9 13:07:32.408590 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jul 9 13:07:32.408608 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jul 9 13:07:32.415509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 13:07:32.423835 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 13:07:32.423852 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 9 13:07:32.575669 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 9 13:07:32.582652 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 9 13:07:32.609734 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 9 13:07:32.609846 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 9 13:07:32.621588 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 9 13:07:32.623220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 9 13:07:32.628639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 9 13:07:32.633906 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. 
Jul 9 13:07:32.638157 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 9 13:07:32.638323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 9 13:07:32.638970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 13:07:32.680591 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 13:07:32.692585 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 13:07:32.852489 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 13:07:32.852830 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 13:07:32.852955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 13:07:32.853147 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 13:07:32.853771 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 13:07:32.867749 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 13:07:33.691657 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 13:07:33.692551 disk-uuid[647]: The operation has completed successfully. Jul 9 13:07:33.723130 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 13:07:33.723200 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 13:07:33.743563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 13:07:33.756382 sh[677]: Success Jul 9 13:07:33.769928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 9 13:07:33.769957 kernel: device-mapper: uevent: version 1.0.3 Jul 9 13:07:33.771078 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 13:07:33.777600 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 9 13:07:33.819504 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 13:07:33.821621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 13:07:33.830677 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 13:07:33.841615 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 13:07:33.841637 kernel: BTRFS: device fsid 082bcfbc-2c86-46fe-87f4-85dea5450235 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (689) Jul 9 13:07:33.844424 kernel: BTRFS info (device dm-0): first mount of filesystem 082bcfbc-2c86-46fe-87f4-85dea5450235 Jul 9 13:07:33.844440 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 9 13:07:33.846005 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 13:07:33.853111 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 13:07:33.853417 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 13:07:33.854010 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 9 13:07:33.855632 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 9 13:07:33.882590 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (712) Jul 9 13:07:33.888123 kernel: BTRFS info (device sda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841 Jul 9 13:07:33.888148 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 13:07:33.888156 kernel: BTRFS info (device sda6): using free-space-tree Jul 9 13:07:33.898599 kernel: BTRFS info (device sda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841 Jul 9 13:07:33.899539 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 13:07:33.900491 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 13:07:33.933486 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 9 13:07:33.935235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 13:07:34.009173 ignition[731]: Ignition 2.21.0 Jul 9 13:07:34.009653 ignition[731]: Stage: fetch-offline Jul 9 13:07:34.009786 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jul 9 13:07:34.009910 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 13:07:34.010102 ignition[731]: parsed url from cmdline: "" Jul 9 13:07:34.010129 ignition[731]: no config URL provided Jul 9 13:07:34.010338 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 13:07:34.010454 ignition[731]: no config at "/usr/lib/ignition/user.ign" Jul 9 13:07:34.011105 ignition[731]: config successfully fetched Jul 9 13:07:34.011150 ignition[731]: parsing config with SHA512: ac41e1f8697c528f19e23053f9cbde45e5fbdf28375a5dbf16b59063eca20953267c57d86bc75bd96277bc4ddc3455632f616929b762c424d9c8f9693775ee5e Jul 9 13:07:34.014997 unknown[731]: fetched base config from "system" Jul 9 13:07:34.015004 unknown[731]: fetched user config from "vmware" Jul 9 13:07:34.015279 ignition[731]: 
fetch-offline: fetch-offline passed Jul 9 13:07:34.015314 ignition[731]: Ignition finished successfully Jul 9 13:07:34.017289 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 13:07:34.019679 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 13:07:34.020798 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 13:07:34.046861 systemd-networkd[871]: lo: Link UP Jul 9 13:07:34.046867 systemd-networkd[871]: lo: Gained carrier Jul 9 13:07:34.047537 systemd-networkd[871]: Enumeration completed Jul 9 13:07:34.047851 systemd-networkd[871]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 9 13:07:34.050819 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 9 13:07:34.050916 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 9 13:07:34.047856 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 13:07:34.047994 systemd[1]: Reached target network.target - Network. Jul 9 13:07:34.048084 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 13:07:34.049914 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 13:07:34.050802 systemd-networkd[871]: ens192: Link UP Jul 9 13:07:34.050804 systemd-networkd[871]: ens192: Gained carrier Jul 9 13:07:34.066195 ignition[874]: Ignition 2.21.0 Jul 9 13:07:34.066207 ignition[874]: Stage: kargs Jul 9 13:07:34.066289 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jul 9 13:07:34.066294 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 13:07:34.066901 ignition[874]: kargs: kargs passed Jul 9 13:07:34.066927 ignition[874]: Ignition finished successfully Jul 9 13:07:34.068345 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 9 13:07:34.069179 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 13:07:34.086991 ignition[882]: Ignition 2.21.0 Jul 9 13:07:34.086998 ignition[882]: Stage: disks Jul 9 13:07:34.087076 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jul 9 13:07:34.087081 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 13:07:34.090243 ignition[882]: disks: disks passed Jul 9 13:07:34.090383 ignition[882]: Ignition finished successfully Jul 9 13:07:34.091295 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 13:07:34.091517 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 13:07:34.091640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 13:07:34.091830 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 13:07:34.092075 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 13:07:34.092252 systemd[1]: Reached target basic.target - Basic System. Jul 9 13:07:34.092973 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 13:07:34.111814 systemd-fsck[891]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 9 13:07:34.113234 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 13:07:34.114129 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 13:07:34.199468 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 13:07:34.199697 kernel: EXT4-fs (sda9): mounted filesystem b08a603c-44fa-43af-af80-90bed9b8770a r/w with ordered data mode. Quota mode: none. Jul 9 13:07:34.199849 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 13:07:34.200841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 13:07:34.201473 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 9 13:07:34.202870 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 13:07:34.203108 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 13:07:34.203309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 13:07:34.211990 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 13:07:34.212765 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 13:07:34.219496 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (899) Jul 9 13:07:34.219528 kernel: BTRFS info (device sda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841 Jul 9 13:07:34.221171 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 13:07:34.221193 kernel: BTRFS info (device sda6): using free-space-tree Jul 9 13:07:34.224729 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 13:07:34.250213 initrd-setup-root[923]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 13:07:34.252838 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory Jul 9 13:07:34.255028 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 13:07:34.256915 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 13:07:34.316439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 13:07:34.317139 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 13:07:34.317658 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 9 13:07:34.333615 kernel: BTRFS info (device sda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841 Jul 9 13:07:34.349502 ignition[1012]: INFO : Ignition 2.21.0 Jul 9 13:07:34.349499 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 13:07:34.349861 ignition[1012]: INFO : Stage: mount Jul 9 13:07:34.350041 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 13:07:34.350163 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 13:07:34.350792 ignition[1012]: INFO : mount: mount passed Jul 9 13:07:34.350792 ignition[1012]: INFO : Ignition finished successfully Jul 9 13:07:34.351628 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 13:07:34.352255 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 13:07:34.841309 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 13:07:34.842228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 13:07:34.866359 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1024) Jul 9 13:07:34.866400 kernel: BTRFS info (device sda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841 Jul 9 13:07:34.866414 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 13:07:34.867949 kernel: BTRFS info (device sda6): using free-space-tree Jul 9 13:07:34.871050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 13:07:34.885127 ignition[1041]: INFO : Ignition 2.21.0 Jul 9 13:07:34.885127 ignition[1041]: INFO : Stage: files Jul 9 13:07:34.885656 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 13:07:34.885656 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 9 13:07:34.886125 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Jul 9 13:07:34.886755 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 13:07:34.886755 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 13:07:34.888741 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 13:07:34.889003 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 13:07:34.889376 unknown[1041]: wrote ssh authorized keys file for user: core Jul 9 13:07:34.889624 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 13:07:34.891475 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 9 13:07:34.891475 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 9 13:07:34.956723 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 13:07:35.179815 systemd-networkd[871]: ens192: Gained IPv6LL Jul 9 13:07:35.321020 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 9 13:07:35.321020 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 13:07:35.321458 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 9 13:07:35.803764 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 13:07:35.900017 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 13:07:35.900017 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 13:07:35.900567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 13:07:35.901687 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 13:07:35.901687 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 13:07:35.901687 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 13:07:35.903609 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 13:07:35.903609 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 13:07:35.904027 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 9 13:07:36.573812 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 13:07:36.768015 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 13:07:36.768372 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 9 13:07:36.768864 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 9 13:07:36.768864 ignition[1041]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Jul 9 13:07:36.769948 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 13:07:36.770586 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 13:07:36.770586 ignition[1041]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Jul 9 13:07:36.770586 ignition[1041]: INFO : files: op(f): [started] processing unit "coreos-metadata.service"
Jul 9 13:07:36.770586 ignition[1041]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 13:07:36.771611 ignition[1041]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 13:07:36.771611 ignition[1041]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service"
Jul 9 13:07:36.771611 ignition[1041]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jul 9 13:07:36.791317 ignition[1041]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 13:07:36.793363 ignition[1041]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 13:07:36.793754 ignition[1041]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 9 13:07:36.793754 ignition[1041]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 13:07:36.793754 ignition[1041]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 13:07:36.794602 ignition[1041]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 13:07:36.794602 ignition[1041]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 13:07:36.795588 ignition[1041]: INFO : files: files passed
Jul 9 13:07:36.795588 ignition[1041]: INFO : Ignition finished successfully
Jul 9 13:07:36.796113 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 13:07:36.797147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 13:07:36.798643 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 13:07:36.804616 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 13:07:36.804796 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 13:07:36.808492 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:07:36.808752 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:07:36.809549 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:07:36.810385 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 13:07:36.810588 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 13:07:36.811281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 13:07:36.846032 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 13:07:36.846101 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 13:07:36.846400 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 13:07:36.846540 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 13:07:36.846778 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 13:07:36.847228 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 13:07:36.869071 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 13:07:36.869858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 13:07:36.894914 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:07:36.895190 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:07:36.895514 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 13:07:36.895822 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 13:07:36.895915 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 13:07:36.896406 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 13:07:36.896672 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 13:07:36.896907 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 13:07:36.897160 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 13:07:36.897422 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 13:07:36.897668 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 13:07:36.897947 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 13:07:36.898206 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 13:07:36.898464 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 13:07:36.898745 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 13:07:36.898987 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 13:07:36.899198 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 13:07:36.899355 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 13:07:36.899699 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:07:36.899964 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:07:36.900211 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 13:07:36.900375 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:07:36.900644 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 13:07:36.900706 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 13:07:36.901104 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 13:07:36.901170 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 13:07:36.901336 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 13:07:36.901435 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 13:07:36.904609 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:07:36.904773 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 13:07:36.905029 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 13:07:36.905243 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 13:07:36.905289 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 13:07:36.905454 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 13:07:36.905497 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 13:07:36.905692 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 13:07:36.905758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 13:07:36.905924 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 13:07:36.905986 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 13:07:36.907655 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 13:07:36.907751 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 13:07:36.907815 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:07:36.908525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 13:07:36.908624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 13:07:36.908689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:07:36.908945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 13:07:36.909018 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 13:07:36.911973 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 13:07:36.917782 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 13:07:36.927031 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 13:07:36.929359 ignition[1097]: INFO : Ignition 2.21.0
Jul 9 13:07:36.929359 ignition[1097]: INFO : Stage: umount
Jul 9 13:07:36.929724 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:07:36.929724 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 9 13:07:36.930519 ignition[1097]: INFO : umount: umount passed
Jul 9 13:07:36.931030 ignition[1097]: INFO : Ignition finished successfully
Jul 9 13:07:36.931274 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 13:07:36.931327 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 13:07:36.931664 systemd[1]: Stopped target network.target - Network.
Jul 9 13:07:36.931846 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 13:07:36.931877 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 13:07:36.932093 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 13:07:36.932116 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 13:07:36.932451 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 13:07:36.932474 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 13:07:36.933132 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 13:07:36.933156 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 13:07:36.933343 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 13:07:36.933471 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 13:07:36.938995 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 13:07:36.939176 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 13:07:36.940397 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 13:07:36.940554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 13:07:36.940589 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:07:36.941251 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:07:36.942717 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 13:07:36.942791 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 13:07:36.943723 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 13:07:36.943840 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 9 13:07:36.944012 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 13:07:36.944031 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:07:36.945632 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 13:07:36.945865 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 13:07:36.946005 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 13:07:36.946134 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 9 13:07:36.946156 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 9 13:07:36.946278 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 13:07:36.946299 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:07:36.947008 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 13:07:36.947033 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:07:36.947234 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:07:36.948282 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 13:07:36.954386 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 13:07:36.954617 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 13:07:36.959902 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 13:07:36.959989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:07:36.960281 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 13:07:36.960307 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:07:36.960516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 13:07:36.960531 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:07:36.960708 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 13:07:36.960731 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 13:07:36.961002 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 13:07:36.961025 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 13:07:36.961315 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 13:07:36.961336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 13:07:36.962044 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 13:07:36.962142 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 13:07:36.962166 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:07:36.962324 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 13:07:36.962348 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:07:36.962611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 13:07:36.962633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:07:36.970571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 13:07:36.970641 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 13:07:37.120029 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 13:07:37.120105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 13:07:37.120516 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 13:07:37.120648 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 13:07:37.120679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 13:07:37.121257 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 13:07:37.137357 systemd[1]: Switching root.
Jul 9 13:07:37.166117 systemd-journald[244]: Journal stopped
Jul 9 13:07:38.175138 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 9 13:07:38.175159 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 13:07:38.175168 kernel: SELinux: policy capability open_perms=1
Jul 9 13:07:38.175173 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 13:07:38.175179 kernel: SELinux: policy capability always_check_network=0
Jul 9 13:07:38.175186 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 13:07:38.175192 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 13:07:38.175197 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 13:07:38.175203 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 13:07:38.175208 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 13:07:38.175214 kernel: audit: type=1403 audit(1752066457.704:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 13:07:38.175220 systemd[1]: Successfully loaded SELinux policy in 47.828ms.
Jul 9 13:07:38.175228 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.720ms.
Jul 9 13:07:38.175235 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 13:07:38.175242 systemd[1]: Detected virtualization vmware.
Jul 9 13:07:38.175248 systemd[1]: Detected architecture x86-64.
Jul 9 13:07:38.175255 systemd[1]: Detected first boot.
Jul 9 13:07:38.175262 systemd[1]: Initializing machine ID from random generator.
Jul 9 13:07:38.175268 zram_generator::config[1141]: No configuration found.
Jul 9 13:07:38.175351 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jul 9 13:07:38.175362 kernel: Guest personality initialized and is active
Jul 9 13:07:38.175368 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 9 13:07:38.175374 kernel: Initialized host personality
Jul 9 13:07:38.175382 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 13:07:38.175389 systemd[1]: Populated /etc with preset unit settings.
Jul 9 13:07:38.175396 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 9 13:07:38.175403 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jul 9 13:07:38.175410 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 13:07:38.175416 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 13:07:38.175422 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 13:07:38.175430 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 13:07:38.175437 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 13:07:38.175443 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 13:07:38.175450 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 13:07:38.175456 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 13:07:38.175463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 13:07:38.175469 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 13:07:38.175478 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 13:07:38.175484 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 13:07:38.175491 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:07:38.175500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:07:38.175506 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 13:07:38.175513 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 13:07:38.175521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 13:07:38.175528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 13:07:38.175536 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 9 13:07:38.175542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:07:38.175549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:07:38.175556 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 13:07:38.175562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 13:07:38.175569 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 13:07:38.177605 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 13:07:38.177618 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:07:38.177629 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 13:07:38.177636 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 13:07:38.177643 systemd[1]: Reached target swap.target - Swaps.
Jul 9 13:07:38.177650 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 13:07:38.177657 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 13:07:38.177665 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 13:07:38.177673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:07:38.177679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:07:38.177686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:07:38.177693 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 13:07:38.177700 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 13:07:38.177707 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 13:07:38.177713 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 13:07:38.177722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:07:38.177729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 13:07:38.177735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 13:07:38.177742 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 13:07:38.177749 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 13:07:38.177756 systemd[1]: Reached target machines.target - Containers.
Jul 9 13:07:38.177763 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 13:07:38.177770 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jul 9 13:07:38.177778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 13:07:38.177784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 13:07:38.177791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:07:38.177798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 13:07:38.177805 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:07:38.177812 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 13:07:38.177818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:07:38.177825 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 13:07:38.177833 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 13:07:38.177840 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 13:07:38.177847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 13:07:38.177853 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 13:07:38.177861 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:07:38.177867 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 13:07:38.177874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 13:07:38.177881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 13:07:38.177890 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 13:07:38.177897 kernel: fuse: init (API version 7.41)
Jul 9 13:07:38.177903 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 13:07:38.177926 systemd-journald[1224]: Collecting audit messages is disabled.
Jul 9 13:07:38.177945 systemd-journald[1224]: Journal started
Jul 9 13:07:38.177961 systemd-journald[1224]: Runtime Journal (/run/log/journal/85a13c040d1e45ec9f58751a2563bc5e) is 4.8M, max 38.8M, 34M free.
Jul 9 13:07:38.041098 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 13:07:38.053683 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 9 13:07:38.053919 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 13:07:38.178550 jq[1211]: true
Jul 9 13:07:38.180289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 13:07:38.180313 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 13:07:38.180322 systemd[1]: Stopped verity-setup.service.
Jul 9 13:07:38.183788 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:07:38.197588 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 13:07:38.198933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 13:07:38.199091 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 13:07:38.199246 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 13:07:38.199386 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 13:07:38.199539 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 13:07:38.199702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 13:07:38.201707 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:07:38.201961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:07:38.202073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:07:38.202764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:07:38.202871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:07:38.203110 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 13:07:38.203223 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 13:07:38.203791 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:07:38.204053 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 13:07:38.207853 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 13:07:38.207999 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 13:07:38.208344 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:07:38.216262 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 13:07:38.218588 kernel: loop: module loaded
Jul 9 13:07:38.218625 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 13:07:38.222638 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 13:07:38.222787 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 13:07:38.222811 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 13:07:38.223487 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 13:07:38.227861 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 13:07:38.228327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:07:38.241324 jq[1242]: true
Jul 9 13:07:38.248092 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 13:07:38.250301 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 13:07:38.251618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 13:07:38.253589 kernel: ACPI: bus type drm_connector registered
Jul 9 13:07:38.255444 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 13:07:38.259876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 13:07:38.261879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 13:07:38.263132 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 13:07:38.263806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 13:07:38.264333 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:07:38.264450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:07:38.265743 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 13:07:38.267944 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 13:07:38.268126 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 13:07:38.269564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 13:07:38.283730 systemd-journald[1224]: Time spent on flushing to /var/log/journal/85a13c040d1e45ec9f58751a2563bc5e is 78.237ms for 1757 entries.
Jul 9 13:07:38.283730 systemd-journald[1224]: System Journal (/var/log/journal/85a13c040d1e45ec9f58751a2563bc5e) is 8M, max 584.8M, 576.8M free.
Jul 9 13:07:38.384686 systemd-journald[1224]: Received client request to flush runtime journal.
Jul 9 13:07:38.384721 kernel: loop0: detected capacity change from 0 to 114008
Jul 9 13:07:38.384738 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 13:07:38.384749 kernel: loop1: detected capacity change from 0 to 2960
Jul 9 13:07:38.333015 ignition[1275]: Ignition 2.21.0
Jul 9 13:07:38.284125 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 13:07:38.333289 ignition[1275]: deleting config from guestinfo properties
Jul 9 13:07:38.284614 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 13:07:38.343221 ignition[1275]: Successfully deleted config
Jul 9 13:07:38.295726 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 13:07:38.308707 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 13:07:38.312348 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 13:07:38.318942 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:07:38.345059 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jul 9 13:07:38.367848 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 13:07:38.379637 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 13:07:38.381207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 13:07:38.386244 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 13:07:38.408990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:07:38.411711 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Jul 9 13:07:38.411881 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Jul 9 13:07:38.414880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:07:38.418701 kernel: loop2: detected capacity change from 0 to 224512
Jul 9 13:07:38.452655 kernel: loop3: detected capacity change from 0 to 146480
Jul 9 13:07:38.511593 kernel: loop4: detected capacity change from 0 to 114008
Jul 9 13:07:38.525648 kernel: loop5: detected capacity change from 0 to 2960
Jul 9 13:07:38.540646 kernel: loop6: detected capacity change from 0 to 224512
Jul 9 13:07:38.570698 kernel: loop7: detected capacity change from 0 to 146480
Jul 9 13:07:38.601764 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jul 9 13:07:38.602045 (sd-merge)[1317]: Merged extensions into '/usr'.
Jul 9 13:07:38.606450 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 13:07:38.606706 systemd[1]: Reloading...
Jul 9 13:07:38.680591 zram_generator::config[1342]: No configuration found.
Jul 9 13:07:38.811566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 13:07:38.821973 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 9 13:07:38.858525 ldconfig[1262]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 13:07:38.871584 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 13:07:38.871697 systemd[1]: Reloading finished in 264 ms.
Jul 9 13:07:38.888032 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 13:07:38.888422 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 13:07:38.888794 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 13:07:38.906423 systemd[1]: Starting ensure-sysext.service...
Jul 9 13:07:38.907180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 13:07:38.909068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:07:38.918225 systemd[1]: Reload requested from client PID 1400 ('systemctl') (unit ensure-sysext.service)...
Jul 9 13:07:38.918234 systemd[1]: Reloading...
Jul 9 13:07:38.920491 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 9 13:07:38.921351 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 9 13:07:38.921557 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 13:07:38.922779 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 13:07:38.923306 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 13:07:38.923484 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Jul 9 13:07:38.923521 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Jul 9 13:07:38.926889 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 13:07:38.926895 systemd-tmpfiles[1401]: Skipping /boot
Jul 9 13:07:38.933344 systemd-udevd[1402]: Using default interface naming scheme 'v255'.
Jul 9 13:07:38.933541 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 13:07:38.933546 systemd-tmpfiles[1401]: Skipping /boot
Jul 9 13:07:38.966995 zram_generator::config[1425]: No configuration found.
Jul 9 13:07:39.105025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 13:07:39.115614 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 9 13:07:39.126589 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 13:07:39.150589 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 9 13:07:39.155590 kernel: ACPI: button: Power Button [PWRF]
Jul 9 13:07:39.184140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jul 9 13:07:39.184521 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 9 13:07:39.184651 systemd[1]: Reloading finished in 266 ms.
Jul 9 13:07:39.192004 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:07:39.197210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:07:39.220712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:07:39.223619 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 13:07:39.228261 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 13:07:39.229079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:07:39.231598 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jul 9 13:07:39.231961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:07:39.234107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:07:39.234353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:07:39.240633 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 13:07:39.240750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:07:39.242178 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 13:07:39.244973 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 13:07:39.246624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 13:07:39.247768 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 13:07:39.247945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:07:39.249252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:07:39.250611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:07:39.250935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:07:39.251049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:07:39.251360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:07:39.251467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:07:39.254093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 13:07:39.254240 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 13:07:39.258454 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 13:07:39.261401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:07:39.270772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:07:39.276384 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 13:07:39.280270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:07:39.285596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:07:39.285802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:07:39.285869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:07:39.294631 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 13:07:39.294755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:07:39.296634 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 13:07:39.299946 systemd[1]: Finished ensure-sysext.service.
Jul 9 13:07:39.307893 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 13:07:39.318850 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 13:07:39.319352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:07:39.319537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:07:39.320026 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 13:07:39.320831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 13:07:39.322835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 13:07:39.328941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:07:39.329069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:07:39.329255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 13:07:39.329875 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:07:39.330613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:07:39.330865 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 13:07:39.339197 augenrules[1583]: No rules
Jul 9 13:07:39.339966 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 13:07:39.340640 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 13:07:39.348642 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 13:07:39.359638 (udev-worker)[1432]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jul 9 13:07:39.368661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:07:39.383471 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 13:07:39.383700 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 13:07:39.403862 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 13:07:39.447911 systemd-networkd[1530]: lo: Link UP
Jul 9 13:07:39.447916 systemd-networkd[1530]: lo: Gained carrier
Jul 9 13:07:39.448768 systemd-networkd[1530]: Enumeration completed
Jul 9 13:07:39.448821 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 13:07:39.448966 systemd-networkd[1530]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Jul 9 13:07:39.451703 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jul 9 13:07:39.451848 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jul 9 13:07:39.451041 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 13:07:39.452833 systemd-networkd[1530]: ens192: Link UP
Jul 9 13:07:39.452917 systemd-networkd[1530]: ens192: Gained carrier
Jul 9 13:07:39.453940 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 13:07:39.477977 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 13:07:39.478165 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 13:07:39.483196 systemd-resolved[1531]: Positive Trust Anchors:
Jul 9 13:07:39.483416 systemd-resolved[1531]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 13:07:39.483442 systemd-resolved[1531]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 13:07:39.486627 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 13:07:39.498920 systemd-resolved[1531]: Defaulting to hostname 'linux'.
Jul 9 13:07:39.500088 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 13:07:39.500271 systemd[1]: Reached target network.target - Network.
Jul 9 13:07:39.500370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:07:39.521735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:07:39.521966 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 13:07:39.522121 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 13:07:39.522245 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 13:07:39.522373 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 9 13:07:39.522558 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 13:07:39.522708 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 13:07:39.522814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 13:07:39.522917 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 13:07:39.522937 systemd[1]: Reached target paths.target - Path Units.
Jul 9 13:07:39.523017 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 13:07:39.523877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 13:07:39.524843 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 13:07:39.526161 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 13:07:39.526345 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 13:07:39.526463 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 13:07:39.528709 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 13:07:39.529280 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 13:07:39.529793 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 13:07:39.530304 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 13:07:39.530399 systemd[1]: Reached target basic.target - Basic System.
Jul 9 13:07:39.530516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 13:07:39.530534 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 13:07:39.531306 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 13:07:39.532105 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 13:07:39.534899 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 13:07:39.536418 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 13:07:39.537250 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 13:07:39.537460 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 13:07:39.539691 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 9 13:07:39.542020 jq[1616]: false
Jul 9 13:07:39.542639 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 13:07:39.550448 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 13:07:39.553669 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 13:07:39.557723 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 13:07:39.560951 google_oslogin_nss_cache[1618]: oslogin_cache_refresh[1618]: Refreshing passwd entry cache
Jul 9 13:07:39.561278 oslogin_cache_refresh[1618]: Refreshing passwd entry cache
Jul 9 13:07:39.562687 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 13:07:39.563270 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 13:07:39.564547 extend-filesystems[1617]: Found /dev/sda6
Jul 9 13:07:39.565943 google_oslogin_nss_cache[1618]: oslogin_cache_refresh[1618]: Failure getting users, quitting
Jul 9 13:07:39.565943 google_oslogin_nss_cache[1618]: oslogin_cache_refresh[1618]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 9 13:07:39.565943 google_oslogin_nss_cache[1618]: oslogin_cache_refresh[1618]: Refreshing group entry cache
Jul 9 13:07:39.565815 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 13:07:39.565671 oslogin_cache_refresh[1618]: Failure getting users, quitting
Jul 9 13:07:39.565682 oslogin_cache_refresh[1618]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 9 13:07:39.565707 oslogin_cache_refresh[1618]: Refreshing group entry cache
Jul 9 13:07:39.567399 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 13:07:39.568385 extend-filesystems[1617]: Found /dev/sda9
Jul 9 13:07:39.568792 google_oslogin_nss_cache[1618]: oslogin_cache_refresh[1618]: Failure getting groups, quitting
Jul 9 13:07:39.568792 google_oslogin_nss_cache[1618]: oslogin_cache_refresh[1618]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 9 13:07:39.568764 oslogin_cache_refresh[1618]: Failure getting groups, quitting
Jul 9 13:07:39.568769 oslogin_cache_refresh[1618]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 9 13:07:39.570033 extend-filesystems[1617]: Checking size of /dev/sda9
Jul 9 13:07:39.572378 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 13:07:39.575666 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jul 9 13:07:39.582991 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 13:07:39.583270 extend-filesystems[1617]: Old size kept for /dev/sda9
Jul 9 13:07:39.583365 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 13:07:39.583519 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 13:07:39.583767 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 9 13:07:39.583936 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 9 13:09:05.377967 systemd-timesyncd[1564]: Contacted time server 99.28.14.242:123 (0.flatcar.pool.ntp.org).
Jul 9 13:09:05.378002 systemd-timesyncd[1564]: Initial clock synchronization to Wed 2025-07-09 13:09:05.377914 UTC.
Jul 9 13:09:05.378261 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 13:09:05.378419 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 13:09:05.379228 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 13:09:05.379533 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 13:09:05.379669 systemd-resolved[1531]: Clock change detected. Flushing caches.
Jul 9 13:09:05.381853 jq[1637]: true
Jul 9 13:09:05.381854 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 13:09:05.382031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 13:09:05.398670 jq[1647]: true
Jul 9 13:09:05.401323 update_engine[1634]: I20250709 13:09:05.401266  1634 main.cc:92] Flatcar Update Engine starting
Jul 9 13:09:05.416323 (ntainerd)[1659]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 13:09:05.417664 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jul 9 13:09:05.424573 tar[1645]: linux-amd64/LICENSE
Jul 9 13:09:05.424573 tar[1645]: linux-amd64/helm
Jul 9 13:09:05.429708 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jul 9 13:09:05.469080 dbus-daemon[1614]: [system] SELinux support is enabled
Jul 9 13:09:05.470938 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 13:09:05.473985 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 13:09:05.474078 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 13:09:05.474693 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 13:09:05.474707 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 13:09:05.477000 systemd-logind[1628]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 9 13:09:05.480327 systemd-logind[1628]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 13:09:05.484772 systemd-logind[1628]: New seat seat0.
Jul 9 13:09:05.487552 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 13:09:05.488763 unknown[1665]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jul 9 13:09:05.489516 unknown[1665]: Core dump limit set to -1
Jul 9 13:09:05.490831 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 13:09:05.492250 update_engine[1634]: I20250709 13:09:05.492156  1634 update_check_scheduler.cc:74] Next update check in 5m41s
Jul 9 13:09:05.501020 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 13:09:05.505070 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jul 9 13:09:05.530562 bash[1683]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 13:09:05.533664 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 13:09:05.534507 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 13:09:05.694720 containerd[1659]: time="2025-07-09T13:09:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 9 13:09:05.695716 locksmithd[1682]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 13:09:05.696160 containerd[1659]: time="2025-07-09T13:09:05.696139223Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 9 13:09:05.706995 containerd[1659]: time="2025-07-09T13:09:05.706964803Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.259µs"
Jul 9 13:09:05.707296 containerd[1659]: time="2025-07-09T13:09:05.707284986Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 9 13:09:05.707411 containerd[1659]: time="2025-07-09T13:09:05.707402738Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707526771Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707538929Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707553416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707587717Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707595052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707742065Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707750314Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707756597Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707761252Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707800330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708558 containerd[1659]: time="2025-07-09T13:09:05.707915459Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708753 containerd[1659]: time="2025-07-09T13:09:05.707931411Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 13:09:05.708753 containerd[1659]: time="2025-07-09T13:09:05.707938052Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 9 13:09:05.708753 containerd[1659]: time="2025-07-09T13:09:05.707956999Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 9 13:09:05.708753 containerd[1659]: time="2025-07-09T13:09:05.708079454Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 9 13:09:05.708753 containerd[1659]: time="2025-07-09T13:09:05.708110942Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 13:09:05.727417 sshd_keygen[1662]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 13:09:05.741256 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 13:09:05.743431 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 13:09:05.752086 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 13:09:05.752216 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 13:09:05.753320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787373347Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787428619Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787441273Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787451502Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787461417Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787469389Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787480750Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787489756Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787498041Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787512790Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787521460Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787533562Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 9 13:09:05.787674 containerd[1659]: time="2025-07-09T13:09:05.787621024Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788241871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788270019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788282489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788291186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788300076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788309560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788317350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788327023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788335105Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788343871Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788397156Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788409405Z" level=info msg="Start snapshots syncer"
Jul 9 13:09:05.791499 containerd[1659]: time="2025-07-09T13:09:05.788425719Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 9 13:09:05.789815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 13:09:05.791890 containerd[1659]: time="2025-07-09T13:09:05.788609490Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 13:09:05.791890 containerd[1659]: time="2025-07-09T13:09:05.788662148Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788712097Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788797320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788814509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788824262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788831799Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788840085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788848129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788856053Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788873790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788882023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788889511Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 
Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788911810Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788923040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 13:09:05.792002 containerd[1659]: time="2025-07-09T13:09:05.788929871Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788936693Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788942503Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788948793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788956371Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788967710Z" level=info msg="runtime interface created" Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788973319Z" level=info msg="created NRI interface" Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788979294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.788988144Z" level=info msg="Connect containerd service" Jul 9 13:09:05.792314 containerd[1659]: time="2025-07-09T13:09:05.789006559Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Jul 9 13:09:05.794334 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 13:09:05.796708 containerd[1659]: time="2025-07-09T13:09:05.796589178Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 13:09:05.797332 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 9 13:09:05.797823 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 13:09:05.881011 tar[1645]: linux-amd64/README.md Jul 9 13:09:05.894545 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912367587Z" level=info msg="Start subscribing containerd event" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912401451Z" level=info msg="Start recovering state" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912433555Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912459466Z" level=info msg="Start event monitor" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912469368Z" level=info msg="Start cni network conf syncer for default" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912473527Z" level=info msg="Start streaming server" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912460042Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912481466Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912500145Z" level=info msg="runtime interface starting up..." 
Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912503416Z" level=info msg="starting plugins..." Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912513172Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 13:09:05.913000 containerd[1659]: time="2025-07-09T13:09:05.912585384Z" level=info msg="containerd successfully booted in 0.218108s" Jul 9 13:09:05.912678 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 13:09:06.604836 systemd-networkd[1530]: ens192: Gained IPv6LL Jul 9 13:09:06.606210 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 13:09:06.607085 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 13:09:06.608457 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 9 13:09:06.624857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:06.626808 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 13:09:06.667011 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 13:09:06.684051 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 13:09:06.684226 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 9 13:09:06.684620 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 13:09:07.890391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:07.891594 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 13:09:07.892463 systemd[1]: Startup finished in 2.759s (kernel) + 6.115s (initrd) + 4.441s (userspace) = 13.316s. 
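Annotation: the `failed to load cni during init` error a few entries earlier is expected on a node where no CNI plugin has been installed yet — containerd's CRI plugin looks for a network config in `/etc/cni/net.d` (per the `confDir` value in the cri plugin config above) and finds none, then proceeds without pod networking. As a hedged sketch only (the file name, bridge name, and subnet below are illustrative, not taken from this log), a minimal conflist that would satisfy the loader looks like:

```shell
# Sketch: write a minimal CNI conflist of the shape containerd's CRI plugin
# loads from /etc/cni/net.d. Uses a scratch dir so it is runnable anywhere;
# on a real node the target would be /etc/cni/net.d. All values illustrative.
CNI_DIR="$(mktemp -d)"
cat > "$CNI_DIR/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    }
  ]
}
EOF
echo "wrote $CNI_DIR/10-bridge.conflist"
```

In a kubeadm-style setup this file is normally written by the CNI add-on (Flannel, Calico, etc.) rather than by hand, which is why the error clears once a network plugin is deployed.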
Jul 9 13:09:07.903430 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:09:07.952037 login[1740]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 9 13:09:07.953531 login[1744]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 9 13:09:07.958446 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 13:09:07.960991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 13:09:07.968365 systemd-logind[1628]: New session 1 of user core. Jul 9 13:09:07.973365 systemd-logind[1628]: New session 2 of user core. Jul 9 13:09:07.977906 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 13:09:07.980446 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 13:09:07.994269 (systemd)[1819]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 13:09:07.996029 systemd-logind[1628]: New session c1 of user core. Jul 9 13:09:08.085257 systemd[1819]: Queued start job for default target default.target. Jul 9 13:09:08.089483 systemd[1819]: Created slice app.slice - User Application Slice. Jul 9 13:09:08.089501 systemd[1819]: Reached target paths.target - Paths. Jul 9 13:09:08.089526 systemd[1819]: Reached target timers.target - Timers. Jul 9 13:09:08.090193 systemd[1819]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 13:09:08.099125 systemd[1819]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 13:09:08.099663 systemd[1819]: Reached target sockets.target - Sockets. Jul 9 13:09:08.099752 systemd[1819]: Reached target basic.target - Basic System. Jul 9 13:09:08.099776 systemd[1819]: Reached target default.target - Main User Target. Jul 9 13:09:08.099793 systemd[1819]: Startup finished in 99ms. 
Jul 9 13:09:08.099977 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 13:09:08.101115 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 13:09:08.101960 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 13:09:08.992052 kubelet[1811]: E0709 13:09:08.991997 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:09:08.994399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:09:08.994749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:09:08.995283 systemd[1]: kubelet.service: Consumed 648ms CPU time, 265.2M memory peak. Jul 9 13:09:19.012946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 13:09:19.014261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:19.293426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:19.296359 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:09:19.323658 kubelet[1866]: E0709 13:09:19.323603 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:09:19.326369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:09:19.326525 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
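Annotation: the kubelet failures that repeat through this log (restart counters 1, 2, 3 below) share one root cause — `/var/lib/kubelet/config.yaml` does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit exits with status 1 and systemd's restart policy retries on a roughly 10-second cadence. A hedged preflight sketch of the condition the kubelet is tripping over (the path comes from the log; the helper function name is mine):

```shell
# Sketch: reproduce the check behind kubelet's "no such file or directory"
# error above. The default path is from the log; the helper is illustrative.
kubelet_config_ready() {
  local cfg="${1:-/var/lib/kubelet/config.yaml}"
  if [ -f "$cfg" ]; then
    echo "config present: $cfg"
    return 0
  fi
  echo "config missing: $cfg (kubeadm init/join generates it)"
  return 1
}

# Demonstrate against a scratch path so the sketch runs anywhere.
SCRATCH="$(mktemp -d)"
kubelet_config_ready "$SCRATCH/config.yaml" || true
touch "$SCRATCH/config.yaml"
kubelet_config_ready "$SCRATCH/config.yaml"
```

The crash-loop is therefore benign on a freshly provisioned node: it resolves on its own the moment kubeadm materializes the config file.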
Jul 9 13:09:19.326922 systemd[1]: kubelet.service: Consumed 102ms CPU time, 110.4M memory peak. Jul 9 13:09:29.512879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 13:09:29.514533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:29.850888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:29.858851 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:09:29.918378 kubelet[1880]: E0709 13:09:29.918343 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:09:29.919880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:09:29.920018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:09:29.920403 systemd[1]: kubelet.service: Consumed 104ms CPU time, 108.2M memory peak. Jul 9 13:09:35.598482 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 13:09:35.600019 systemd[1]: Started sshd@0-139.178.70.108:22-139.178.68.195:39146.service - OpenSSH per-connection server daemon (139.178.68.195:39146). Jul 9 13:09:35.658747 sshd[1887]: Accepted publickey for core from 139.178.68.195 port 39146 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:35.659485 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:35.662152 systemd-logind[1628]: New session 3 of user core. Jul 9 13:09:35.669731 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 9 13:09:35.722799 systemd[1]: Started sshd@1-139.178.70.108:22-139.178.68.195:39154.service - OpenSSH per-connection server daemon (139.178.68.195:39154). Jul 9 13:09:35.759012 sshd[1893]: Accepted publickey for core from 139.178.68.195 port 39154 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:35.759527 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:35.763072 systemd-logind[1628]: New session 4 of user core. Jul 9 13:09:35.776758 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 13:09:35.824088 sshd[1896]: Connection closed by 139.178.68.195 port 39154 Jul 9 13:09:35.825791 sshd-session[1893]: pam_unix(sshd:session): session closed for user core Jul 9 13:09:35.832456 systemd[1]: sshd@1-139.178.70.108:22-139.178.68.195:39154.service: Deactivated successfully. Jul 9 13:09:35.833401 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 13:09:35.833901 systemd-logind[1628]: Session 4 logged out. Waiting for processes to exit. Jul 9 13:09:35.834966 systemd[1]: Started sshd@2-139.178.70.108:22-139.178.68.195:39162.service - OpenSSH per-connection server daemon (139.178.68.195:39162). Jul 9 13:09:35.836184 systemd-logind[1628]: Removed session 4. Jul 9 13:09:35.871145 sshd[1902]: Accepted publickey for core from 139.178.68.195 port 39162 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:35.872105 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:35.874610 systemd-logind[1628]: New session 5 of user core. Jul 9 13:09:35.884804 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 13:09:35.931221 sshd[1905]: Connection closed by 139.178.68.195 port 39162 Jul 9 13:09:35.931167 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jul 9 13:09:35.940209 systemd[1]: sshd@2-139.178.70.108:22-139.178.68.195:39162.service: Deactivated successfully. 
Jul 9 13:09:35.941298 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 13:09:35.941828 systemd-logind[1628]: Session 5 logged out. Waiting for processes to exit. Jul 9 13:09:35.943234 systemd[1]: Started sshd@3-139.178.70.108:22-139.178.68.195:39164.service - OpenSSH per-connection server daemon (139.178.68.195:39164). Jul 9 13:09:35.945062 systemd-logind[1628]: Removed session 5. Jul 9 13:09:35.981090 sshd[1911]: Accepted publickey for core from 139.178.68.195 port 39164 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:35.981897 sshd-session[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:35.984432 systemd-logind[1628]: New session 6 of user core. Jul 9 13:09:35.993791 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 13:09:36.042516 sshd[1914]: Connection closed by 139.178.68.195 port 39164 Jul 9 13:09:36.042889 sshd-session[1911]: pam_unix(sshd:session): session closed for user core Jul 9 13:09:36.046956 systemd[1]: sshd@3-139.178.70.108:22-139.178.68.195:39164.service: Deactivated successfully. Jul 9 13:09:36.048165 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 13:09:36.049230 systemd-logind[1628]: Session 6 logged out. Waiting for processes to exit. Jul 9 13:09:36.050814 systemd-logind[1628]: Removed session 6. Jul 9 13:09:36.051782 systemd[1]: Started sshd@4-139.178.70.108:22-139.178.68.195:39176.service - OpenSSH per-connection server daemon (139.178.68.195:39176). Jul 9 13:09:36.083767 sshd[1920]: Accepted publickey for core from 139.178.68.195 port 39176 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:36.084463 sshd-session[1920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:36.087139 systemd-logind[1628]: New session 7 of user core. Jul 9 13:09:36.096902 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 9 13:09:36.154329 sudo[1924]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 13:09:36.154522 sudo[1924]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:09:36.174169 sudo[1924]: pam_unix(sudo:session): session closed for user root Jul 9 13:09:36.175049 sshd[1923]: Connection closed by 139.178.68.195 port 39176 Jul 9 13:09:36.175488 sshd-session[1920]: pam_unix(sshd:session): session closed for user core Jul 9 13:09:36.182476 systemd[1]: sshd@4-139.178.70.108:22-139.178.68.195:39176.service: Deactivated successfully. Jul 9 13:09:36.183703 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 13:09:36.184382 systemd-logind[1628]: Session 7 logged out. Waiting for processes to exit. Jul 9 13:09:36.186470 systemd[1]: Started sshd@5-139.178.70.108:22-139.178.68.195:39178.service - OpenSSH per-connection server daemon (139.178.68.195:39178). Jul 9 13:09:36.187895 systemd-logind[1628]: Removed session 7. Jul 9 13:09:36.226205 sshd[1930]: Accepted publickey for core from 139.178.68.195 port 39178 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:36.227010 sshd-session[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:36.229850 systemd-logind[1628]: New session 8 of user core. Jul 9 13:09:36.238727 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 9 13:09:36.288123 sudo[1935]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 13:09:36.288323 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:09:36.291280 sudo[1935]: pam_unix(sudo:session): session closed for user root Jul 9 13:09:36.295099 sudo[1934]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 13:09:36.295462 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:09:36.303162 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 13:09:36.329820 augenrules[1957]: No rules Jul 9 13:09:36.330443 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 13:09:36.330622 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 13:09:36.331573 sudo[1934]: pam_unix(sudo:session): session closed for user root Jul 9 13:09:36.332703 sshd[1933]: Connection closed by 139.178.68.195 port 39178 Jul 9 13:09:36.332593 sshd-session[1930]: pam_unix(sshd:session): session closed for user core Jul 9 13:09:36.343653 systemd[1]: sshd@5-139.178.70.108:22-139.178.68.195:39178.service: Deactivated successfully. Jul 9 13:09:36.344411 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 13:09:36.344856 systemd-logind[1628]: Session 8 logged out. Waiting for processes to exit. Jul 9 13:09:36.345888 systemd[1]: Started sshd@6-139.178.70.108:22-139.178.68.195:39188.service - OpenSSH per-connection server daemon (139.178.68.195:39188). Jul 9 13:09:36.347737 systemd-logind[1628]: Removed session 8. 
Jul 9 13:09:36.386261 sshd[1966]: Accepted publickey for core from 139.178.68.195 port 39188 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI Jul 9 13:09:36.387176 sshd-session[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:09:36.391260 systemd-logind[1628]: New session 9 of user core. Jul 9 13:09:36.399745 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 13:09:36.449227 sudo[1970]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 13:09:36.449456 sudo[1970]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:09:36.743112 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 13:09:36.751921 (dockerd)[1988]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 13:09:37.003042 dockerd[1988]: time="2025-07-09T13:09:37.002968995Z" level=info msg="Starting up" Jul 9 13:09:37.003586 dockerd[1988]: time="2025-07-09T13:09:37.003572746Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 13:09:37.010586 dockerd[1988]: time="2025-07-09T13:09:37.010553789Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 9 13:09:37.055899 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1447133716-merged.mount: Deactivated successfully. Jul 9 13:09:37.192039 dockerd[1988]: time="2025-07-09T13:09:37.192005244Z" level=info msg="Loading containers: start." Jul 9 13:09:37.225658 kernel: Initializing XFRM netlink socket Jul 9 13:09:37.431725 systemd-networkd[1530]: docker0: Link UP Jul 9 13:09:37.432972 dockerd[1988]: time="2025-07-09T13:09:37.432949417Z" level=info msg="Loading containers: done." 
Jul 9 13:09:37.441225 dockerd[1988]: time="2025-07-09T13:09:37.441198919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 13:09:37.441328 dockerd[1988]: time="2025-07-09T13:09:37.441258080Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 9 13:09:37.441328 dockerd[1988]: time="2025-07-09T13:09:37.441303599Z" level=info msg="Initializing buildkit" Jul 9 13:09:37.450699 dockerd[1988]: time="2025-07-09T13:09:37.450675005Z" level=info msg="Completed buildkit initialization" Jul 9 13:09:37.456372 dockerd[1988]: time="2025-07-09T13:09:37.456345208Z" level=info msg="Daemon has completed initialization" Jul 9 13:09:37.456549 dockerd[1988]: time="2025-07-09T13:09:37.456425873Z" level=info msg="API listen on /run/docker.sock" Jul 9 13:09:37.456685 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 13:09:38.137029 containerd[1659]: time="2025-07-09T13:09:38.137001906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 9 13:09:38.930876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321726121.mount: Deactivated successfully. Jul 9 13:09:40.012972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 13:09:40.014588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:40.214998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
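Annotation: the containerd pull records below report both an image size and a wall-clock duration (e.g. kube-apiserver:v1.32.6, size `28795845` bytes "in 2.122713863s"), so approximate per-image pull throughput can be derived directly from the log. A small illustrative calculation using those two numbers:

```shell
# Sketch: approximate pull throughput from the containerd log fields.
# Size and duration are copied from the kube-apiserver:v1.32.6 pull record.
SIZE_BYTES=28795845      # from: size "28795845"
DURATION_S=2.122713863   # from: in 2.122713863s
awk -v b="$SIZE_BYTES" -v s="$DURATION_S" \
  'BEGIN { printf "~%.1f MiB/s\n", b / s / 1048576 }'
```

This is only a rough figure (the duration includes registry round-trips and unpacking, not just transfer), but it is a quick way to compare pulls across the records that follow.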
Jul 9 13:09:40.224892 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:09:40.241647 containerd[1659]: time="2025-07-09T13:09:40.241488413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:40.249798 containerd[1659]: time="2025-07-09T13:09:40.249777532Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 9 13:09:40.254734 containerd[1659]: time="2025-07-09T13:09:40.254720692Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:40.259256 containerd[1659]: time="2025-07-09T13:09:40.259229621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:40.259759 containerd[1659]: time="2025-07-09T13:09:40.259739665Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.122713863s" Jul 9 13:09:40.259792 containerd[1659]: time="2025-07-09T13:09:40.259767275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 9 13:09:40.260436 containerd[1659]: time="2025-07-09T13:09:40.260422596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 9 13:09:40.262472 
kubelet[2259]: E0709 13:09:40.262446 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:09:40.263873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:09:40.264009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:09:40.264389 systemd[1]: kubelet.service: Consumed 98ms CPU time, 108.6M memory peak. Jul 9 13:09:41.580651 containerd[1659]: time="2025-07-09T13:09:41.580343209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:41.588334 containerd[1659]: time="2025-07-09T13:09:41.588313813Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 9 13:09:41.598967 containerd[1659]: time="2025-07-09T13:09:41.598928897Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:41.604329 containerd[1659]: time="2025-07-09T13:09:41.604294520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:41.605220 containerd[1659]: time="2025-07-09T13:09:41.605104154Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.344662219s" Jul 9 13:09:41.605220 containerd[1659]: time="2025-07-09T13:09:41.605140199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 9 13:09:41.605616 containerd[1659]: time="2025-07-09T13:09:41.605514137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 9 13:09:42.709202 containerd[1659]: time="2025-07-09T13:09:42.709172287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:42.709884 containerd[1659]: time="2025-07-09T13:09:42.709811694Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 9 13:09:42.710294 containerd[1659]: time="2025-07-09T13:09:42.710280385Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:42.711606 containerd[1659]: time="2025-07-09T13:09:42.711593492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:42.712157 containerd[1659]: time="2025-07-09T13:09:42.712144602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.106590633s" Jul 9 13:09:42.712269 containerd[1659]: 
time="2025-07-09T13:09:42.712201728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 9 13:09:42.712506 containerd[1659]: time="2025-07-09T13:09:42.712491623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 13:09:43.611531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872181811.mount: Deactivated successfully. Jul 9 13:09:44.049794 containerd[1659]: time="2025-07-09T13:09:44.049758958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:44.052769 containerd[1659]: time="2025-07-09T13:09:44.052742318Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 9 13:09:44.057270 containerd[1659]: time="2025-07-09T13:09:44.057236172Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:44.064586 containerd[1659]: time="2025-07-09T13:09:44.064533505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:44.065040 containerd[1659]: time="2025-07-09T13:09:44.064886071Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.35230287s" Jul 9 13:09:44.065040 containerd[1659]: time="2025-07-09T13:09:44.064908298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" 
returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 9 13:09:44.065360 containerd[1659]: time="2025-07-09T13:09:44.065184116Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 13:09:44.699732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233490544.mount: Deactivated successfully. Jul 9 13:09:45.484082 containerd[1659]: time="2025-07-09T13:09:45.484046244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:45.487013 containerd[1659]: time="2025-07-09T13:09:45.486990558Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 9 13:09:45.491861 containerd[1659]: time="2025-07-09T13:09:45.491831976Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:45.496761 containerd[1659]: time="2025-07-09T13:09:45.496739230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:45.497606 containerd[1659]: time="2025-07-09T13:09:45.497582278Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.432381316s" Jul 9 13:09:45.497736 containerd[1659]: time="2025-07-09T13:09:45.497666443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 13:09:45.498269 containerd[1659]: time="2025-07-09T13:09:45.498242232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 13:09:46.450205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039803978.mount: Deactivated successfully. Jul 9 13:09:46.452209 containerd[1659]: time="2025-07-09T13:09:46.452184138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 13:09:46.452734 containerd[1659]: time="2025-07-09T13:09:46.452718224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 9 13:09:46.453240 containerd[1659]: time="2025-07-09T13:09:46.453223553Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 13:09:46.454214 containerd[1659]: time="2025-07-09T13:09:46.454191105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 13:09:46.454643 containerd[1659]: time="2025-07-09T13:09:46.454565150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 956.296053ms" Jul 9 13:09:46.454643 containerd[1659]: time="2025-07-09T13:09:46.454581705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image 
reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 13:09:46.454891 containerd[1659]: time="2025-07-09T13:09:46.454877197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 9 13:09:47.168670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649351009.mount: Deactivated successfully. Jul 9 13:09:50.512760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 9 13:09:50.514870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:50.581737 update_engine[1634]: I20250709 13:09:50.581701 1634 update_attempter.cc:509] Updating boot flags... Jul 9 13:09:50.985491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:50.990861 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:09:51.099224 containerd[1659]: time="2025-07-09T13:09:51.098808611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:51.100615 containerd[1659]: time="2025-07-09T13:09:51.100593601Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 9 13:09:51.100978 containerd[1659]: time="2025-07-09T13:09:51.100957413Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:51.103961 containerd[1659]: time="2025-07-09T13:09:51.103925138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:09:51.104544 containerd[1659]: time="2025-07-09T13:09:51.104454125Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" 
with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.649541837s" Jul 9 13:09:51.104616 containerd[1659]: time="2025-07-09T13:09:51.104604853Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 9 13:09:51.128312 kubelet[2423]: E0709 13:09:51.128286 2423 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:09:51.132839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:09:51.133103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:09:51.133697 systemd[1]: kubelet.service: Consumed 108ms CPU time, 112.3M memory peak. Jul 9 13:09:52.747734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:52.748027 systemd[1]: kubelet.service: Consumed 108ms CPU time, 112.3M memory peak. Jul 9 13:09:52.749586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:52.766390 systemd[1]: Reload requested from client PID 2456 ('systemctl') (unit session-9.scope)... Jul 9 13:09:52.766406 systemd[1]: Reloading... Jul 9 13:09:52.828657 zram_generator::config[2503]: No configuration found. Jul 9 13:09:52.892454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 9 13:09:52.900712 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 9 13:09:52.968848 systemd[1]: Reloading finished in 202 ms. Jul 9 13:09:53.006764 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 13:09:53.006825 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 13:09:53.006997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:53.008800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:53.357576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:53.365789 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 13:09:53.475668 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 13:09:53.475668 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 13:09:53.475668 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 13:09:53.475668 kubelet[2567]: I0709 13:09:53.475126 2567 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 13:09:53.750061 kubelet[2567]: I0709 13:09:53.749887 2567 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 13:09:53.750061 kubelet[2567]: I0709 13:09:53.749907 2567 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 13:09:53.750166 kubelet[2567]: I0709 13:09:53.750156 2567 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 13:09:53.779992 kubelet[2567]: I0709 13:09:53.779971 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 13:09:53.781180 kubelet[2567]: E0709 13:09:53.780852 2567 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:53.792388 kubelet[2567]: I0709 13:09:53.792372 2567 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 13:09:53.796571 kubelet[2567]: I0709 13:09:53.796551 2567 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 13:09:53.798488 kubelet[2567]: I0709 13:09:53.798220 2567 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 13:09:53.798488 kubelet[2567]: I0709 13:09:53.798241 2567 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 13:09:53.798488 kubelet[2567]: I0709 13:09:53.798354 2567 topology_manager.go:138] "Creating topology manager with none policy" Jul 
9 13:09:53.798488 kubelet[2567]: I0709 13:09:53.798363 2567 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 13:09:53.799270 kubelet[2567]: I0709 13:09:53.799262 2567 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:09:53.803505 kubelet[2567]: I0709 13:09:53.803497 2567 kubelet.go:446] "Attempting to sync node with API server" Jul 9 13:09:53.803945 kubelet[2567]: I0709 13:09:53.803937 2567 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 13:09:53.806068 kubelet[2567]: I0709 13:09:53.806060 2567 kubelet.go:352] "Adding apiserver pod source" Jul 9 13:09:53.806475 kubelet[2567]: I0709 13:09:53.806468 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 13:09:53.807779 kubelet[2567]: W0709 13:09:53.807755 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:53.807829 kubelet[2567]: E0709 13:09:53.807821 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:53.809304 kubelet[2567]: W0709 13:09:53.809160 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:53.809304 kubelet[2567]: E0709 13:09:53.809183 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:53.810492 kubelet[2567]: I0709 13:09:53.810481 2567 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 13:09:53.813613 kubelet[2567]: I0709 13:09:53.812876 2567 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 13:09:53.813613 kubelet[2567]: W0709 13:09:53.812911 2567 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 13:09:53.815145 kubelet[2567]: I0709 13:09:53.814536 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 13:09:53.815145 kubelet[2567]: I0709 13:09:53.814554 2567 server.go:1287] "Started kubelet" Jul 9 13:09:53.815145 kubelet[2567]: I0709 13:09:53.814858 2567 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 13:09:53.820153 kubelet[2567]: I0709 13:09:53.820127 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 13:09:53.821910 kubelet[2567]: I0709 13:09:53.821901 2567 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 13:09:53.824536 kubelet[2567]: I0709 13:09:53.824102 2567 server.go:479] "Adding debug handlers to kubelet server" Jul 9 13:09:53.824606 kubelet[2567]: E0709 13:09:53.822710 2567 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18509748c324fe14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 13:09:53.814543892 +0000 UTC m=+0.446681685,LastTimestamp:2025-07-09 13:09:53.814543892 +0000 UTC m=+0.446681685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 13:09:53.826980 kubelet[2567]: I0709 13:09:53.826971 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 13:09:53.829774 kubelet[2567]: I0709 13:09:53.829761 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 13:09:53.832363 kubelet[2567]: I0709 13:09:53.831769 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 13:09:53.832363 kubelet[2567]: E0709 13:09:53.831863 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:09:53.832363 kubelet[2567]: I0709 13:09:53.831893 2567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 13:09:53.832363 kubelet[2567]: I0709 13:09:53.831916 2567 reconciler.go:26] "Reconciler: start to sync state" Jul 9 13:09:53.832363 kubelet[2567]: W0709 13:09:53.832069 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:53.832363 kubelet[2567]: E0709 13:09:53.832092 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:53.832363 kubelet[2567]: E0709 13:09:53.832124 2567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Jul 9 13:09:53.835706 kubelet[2567]: I0709 13:09:53.835433 2567 factory.go:221] Registration of the containerd container factory successfully Jul 9 13:09:53.835706 kubelet[2567]: I0709 13:09:53.835442 2567 factory.go:221] Registration of the systemd container factory successfully Jul 9 13:09:53.835706 kubelet[2567]: I0709 13:09:53.835476 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 13:09:53.841187 kubelet[2567]: I0709 13:09:53.841155 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 13:09:53.841892 kubelet[2567]: I0709 13:09:53.841881 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 13:09:53.841944 kubelet[2567]: I0709 13:09:53.841939 2567 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 13:09:53.842001 kubelet[2567]: I0709 13:09:53.841993 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 13:09:53.842046 kubelet[2567]: I0709 13:09:53.842040 2567 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 13:09:53.842122 kubelet[2567]: E0709 13:09:53.842108 2567 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 13:09:53.847295 kubelet[2567]: W0709 13:09:53.847269 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:53.847502 kubelet[2567]: E0709 13:09:53.847491 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:53.847763 kubelet[2567]: E0709 13:09:53.847719 2567 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 13:09:53.861671 kubelet[2567]: I0709 13:09:53.861659 2567 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 13:09:53.861820 kubelet[2567]: I0709 13:09:53.861735 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 13:09:53.861820 kubelet[2567]: I0709 13:09:53.861745 2567 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:09:53.862910 kubelet[2567]: I0709 13:09:53.862774 2567 policy_none.go:49] "None policy: Start" Jul 9 13:09:53.862910 kubelet[2567]: I0709 13:09:53.862784 2567 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 13:09:53.862910 kubelet[2567]: I0709 13:09:53.862791 2567 state_mem.go:35] "Initializing new in-memory state store" Jul 9 13:09:53.865796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 13:09:53.877163 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 13:09:53.879567 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 13:09:53.889221 kubelet[2567]: I0709 13:09:53.889204 2567 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 13:09:53.889604 kubelet[2567]: I0709 13:09:53.889319 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 13:09:53.889604 kubelet[2567]: I0709 13:09:53.889328 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 13:09:53.889604 kubelet[2567]: I0709 13:09:53.889553 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 13:09:53.890038 kubelet[2567]: E0709 13:09:53.890029 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 13:09:53.890302 kubelet[2567]: E0709 13:09:53.890292 2567 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 13:09:53.952910 systemd[1]: Created slice kubepods-burstable-pod8ae7f133d43a2a4e542539d39e6bc3a7.slice - libcontainer container kubepods-burstable-pod8ae7f133d43a2a4e542539d39e6bc3a7.slice. Jul 9 13:09:53.960393 kubelet[2567]: E0709 13:09:53.960226 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:53.963320 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 9 13:09:53.973725 kubelet[2567]: E0709 13:09:53.973712 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:53.975152 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 9 13:09:53.977430 kubelet[2567]: E0709 13:09:53.977408 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:53.990523 kubelet[2567]: I0709 13:09:53.990487 2567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:09:53.990934 kubelet[2567]: E0709 13:09:53.990902 2567 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jul 9 13:09:54.033429 kubelet[2567]: E0709 13:09:54.033393 2567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Jul 9 13:09:54.033708 kubelet[2567]: I0709 13:09:54.033627 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:54.033708 kubelet[2567]: I0709 13:09:54.033674 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:54.033708 kubelet[2567]: I0709 13:09:54.033689 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:54.033857 kubelet[2567]: I0709 13:09:54.033717 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ae7f133d43a2a4e542539d39e6bc3a7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ae7f133d43a2a4e542539d39e6bc3a7\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:54.033857 kubelet[2567]: I0709 13:09:54.033756 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:54.033857 kubelet[2567]: I0709 13:09:54.033771 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:54.033857 kubelet[2567]: I0709 13:09:54.033784 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ae7f133d43a2a4e542539d39e6bc3a7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ae7f133d43a2a4e542539d39e6bc3a7\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:54.033857 kubelet[2567]: I0709 13:09:54.033796 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8ae7f133d43a2a4e542539d39e6bc3a7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8ae7f133d43a2a4e542539d39e6bc3a7\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:54.033986 kubelet[2567]: I0709 13:09:54.033808 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:54.192133 kubelet[2567]: I0709 13:09:54.192108 2567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:09:54.192382 kubelet[2567]: E0709 13:09:54.192349 2567 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jul 9 13:09:54.261944 containerd[1659]: time="2025-07-09T13:09:54.261895077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8ae7f133d43a2a4e542539d39e6bc3a7,Namespace:kube-system,Attempt:0,}" Jul 9 13:09:54.280164 containerd[1659]: time="2025-07-09T13:09:54.280065427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 9 13:09:54.280237 containerd[1659]: time="2025-07-09T13:09:54.280196225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 9 13:09:54.328110 containerd[1659]: time="2025-07-09T13:09:54.327972301Z" level=info msg="connecting to shim 686fad71bb57527f0cc6c9bba6116307a72de60111621d0c0855c542386dac55" 
address="unix:///run/containerd/s/9d5736799efdaacef188416a9b1320c8b1bdac21929d50656770d26a40946f1a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:09:54.332213 containerd[1659]: time="2025-07-09T13:09:54.332187250Z" level=info msg="connecting to shim 714af6a8e83d0429d2c03a05addddecb477dd2ceefe7b981c9370f4f3168bd7e" address="unix:///run/containerd/s/90c6bc524112602e927812dcd1d1da958297f26635483bef512925f484b93b9f" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:09:54.335465 containerd[1659]: time="2025-07-09T13:09:54.335429280Z" level=info msg="connecting to shim e179f95a02d0a1e727c2e60cd95467998e4d5bb4d412c6bf7a00d71c7aa345ce" address="unix:///run/containerd/s/9573601efca1e2bd05b94edbca4ddb7f9326a0376f0b5fd7924f4cc0541fa4f9" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:09:54.388740 systemd[1]: Started cri-containerd-686fad71bb57527f0cc6c9bba6116307a72de60111621d0c0855c542386dac55.scope - libcontainer container 686fad71bb57527f0cc6c9bba6116307a72de60111621d0c0855c542386dac55. Jul 9 13:09:54.390217 systemd[1]: Started cri-containerd-e179f95a02d0a1e727c2e60cd95467998e4d5bb4d412c6bf7a00d71c7aa345ce.scope - libcontainer container e179f95a02d0a1e727c2e60cd95467998e4d5bb4d412c6bf7a00d71c7aa345ce. Jul 9 13:09:54.393220 systemd[1]: Started cri-containerd-714af6a8e83d0429d2c03a05addddecb477dd2ceefe7b981c9370f4f3168bd7e.scope - libcontainer container 714af6a8e83d0429d2c03a05addddecb477dd2ceefe7b981c9370f4f3168bd7e. 
Jul 9 13:09:54.434511 kubelet[2567]: E0709 13:09:54.434490 2567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Jul 9 13:09:54.452935 containerd[1659]: time="2025-07-09T13:09:54.452914722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"686fad71bb57527f0cc6c9bba6116307a72de60111621d0c0855c542386dac55\"" Jul 9 13:09:54.456530 containerd[1659]: time="2025-07-09T13:09:54.456009073Z" level=info msg="CreateContainer within sandbox \"686fad71bb57527f0cc6c9bba6116307a72de60111621d0c0855c542386dac55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 13:09:54.461391 containerd[1659]: time="2025-07-09T13:09:54.461357795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8ae7f133d43a2a4e542539d39e6bc3a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"714af6a8e83d0429d2c03a05addddecb477dd2ceefe7b981c9370f4f3168bd7e\"" Jul 9 13:09:54.464042 containerd[1659]: time="2025-07-09T13:09:54.464025942Z" level=info msg="CreateContainer within sandbox \"714af6a8e83d0429d2c03a05addddecb477dd2ceefe7b981c9370f4f3168bd7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 13:09:54.469720 containerd[1659]: time="2025-07-09T13:09:54.469695103Z" level=info msg="Container ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:09:54.471836 containerd[1659]: time="2025-07-09T13:09:54.471784998Z" level=info msg="Container 0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:09:54.477312 containerd[1659]: 
time="2025-07-09T13:09:54.477288578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e179f95a02d0a1e727c2e60cd95467998e4d5bb4d412c6bf7a00d71c7aa345ce\"" Jul 9 13:09:54.480054 containerd[1659]: time="2025-07-09T13:09:54.480017346Z" level=info msg="CreateContainer within sandbox \"e179f95a02d0a1e727c2e60cd95467998e4d5bb4d412c6bf7a00d71c7aa345ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 13:09:54.484613 containerd[1659]: time="2025-07-09T13:09:54.484520635Z" level=info msg="CreateContainer within sandbox \"686fad71bb57527f0cc6c9bba6116307a72de60111621d0c0855c542386dac55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14\"" Jul 9 13:09:54.484874 containerd[1659]: time="2025-07-09T13:09:54.484862229Z" level=info msg="Container cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:09:54.485187 containerd[1659]: time="2025-07-09T13:09:54.485145782Z" level=info msg="CreateContainer within sandbox \"714af6a8e83d0429d2c03a05addddecb477dd2ceefe7b981c9370f4f3168bd7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf\"" Jul 9 13:09:54.485361 containerd[1659]: time="2025-07-09T13:09:54.485261275Z" level=info msg="StartContainer for \"ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14\"" Jul 9 13:09:54.485918 containerd[1659]: time="2025-07-09T13:09:54.485902772Z" level=info msg="connecting to shim ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14" address="unix:///run/containerd/s/9d5736799efdaacef188416a9b1320c8b1bdac21929d50656770d26a40946f1a" protocol=ttrpc version=3 Jul 9 13:09:54.486406 containerd[1659]: 
time="2025-07-09T13:09:54.486393371Z" level=info msg="StartContainer for \"0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf\"" Jul 9 13:09:54.488809 containerd[1659]: time="2025-07-09T13:09:54.488794149Z" level=info msg="connecting to shim 0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf" address="unix:///run/containerd/s/90c6bc524112602e927812dcd1d1da958297f26635483bef512925f484b93b9f" protocol=ttrpc version=3 Jul 9 13:09:54.489532 containerd[1659]: time="2025-07-09T13:09:54.489478033Z" level=info msg="CreateContainer within sandbox \"e179f95a02d0a1e727c2e60cd95467998e4d5bb4d412c6bf7a00d71c7aa345ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da\"" Jul 9 13:09:54.489782 containerd[1659]: time="2025-07-09T13:09:54.489763509Z" level=info msg="StartContainer for \"cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da\"" Jul 9 13:09:54.494094 containerd[1659]: time="2025-07-09T13:09:54.493857008Z" level=info msg="connecting to shim cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da" address="unix:///run/containerd/s/9573601efca1e2bd05b94edbca4ddb7f9326a0376f0b5fd7924f4cc0541fa4f9" protocol=ttrpc version=3 Jul 9 13:09:54.504739 systemd[1]: Started cri-containerd-ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14.scope - libcontainer container ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14. Jul 9 13:09:54.507292 systemd[1]: Started cri-containerd-0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf.scope - libcontainer container 0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf. Jul 9 13:09:54.515804 systemd[1]: Started cri-containerd-cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da.scope - libcontainer container cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da. 
Jul 9 13:09:54.561827 containerd[1659]: time="2025-07-09T13:09:54.561793020Z" level=info msg="StartContainer for \"0bcfc960b546a06bfdad38da4624724f7640939d4005ce679ba540f44f453fdf\" returns successfully" Jul 9 13:09:54.562963 containerd[1659]: time="2025-07-09T13:09:54.562950329Z" level=info msg="StartContainer for \"ae8679090606f599d7829d865594042f233628b0a89984a15f53e4bcdc9dcc14\" returns successfully" Jul 9 13:09:54.580447 containerd[1659]: time="2025-07-09T13:09:54.580199165Z" level=info msg="StartContainer for \"cf3628a6017dabc95b3e6dfe4ec2be7fd6ce371adbf3aeaaf6a34629e99907da\" returns successfully" Jul 9 13:09:54.595265 kubelet[2567]: I0709 13:09:54.595174 2567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:09:54.596307 kubelet[2567]: E0709 13:09:54.596285 2567 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jul 9 13:09:54.790840 kubelet[2567]: W0709 13:09:54.790778 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:54.790840 kubelet[2567]: E0709 13:09:54.790822 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:54.796650 kubelet[2567]: W0709 13:09:54.796153 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:54.796725 kubelet[2567]: E0709 13:09:54.796706 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:54.836769 kubelet[2567]: W0709 13:09:54.836681 2567 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jul 9 13:09:54.836769 kubelet[2567]: E0709 13:09:54.836722 2567 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jul 9 13:09:54.854138 kubelet[2567]: E0709 13:09:54.853987 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:54.856496 kubelet[2567]: E0709 13:09:54.856344 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:54.857197 kubelet[2567]: E0709 13:09:54.857188 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:55.397373 kubelet[2567]: I0709 13:09:55.397323 2567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:09:55.859206 kubelet[2567]: 
E0709 13:09:55.859188 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:55.859582 kubelet[2567]: E0709 13:09:55.859556 2567 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:09:56.456552 kubelet[2567]: E0709 13:09:56.456524 2567 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 13:09:56.622913 kubelet[2567]: I0709 13:09:56.622668 2567 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 13:09:56.622913 kubelet[2567]: E0709 13:09:56.622703 2567 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 9 13:09:56.665718 kubelet[2567]: E0709 13:09:56.665696 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:09:56.766350 kubelet[2567]: E0709 13:09:56.766322 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:09:56.866539 kubelet[2567]: E0709 13:09:56.866511 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:09:56.967714 kubelet[2567]: E0709 13:09:56.967616 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 13:09:57.033030 kubelet[2567]: I0709 13:09:57.032955 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:57.039965 kubelet[2567]: E0709 13:09:57.039935 2567 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:57.040170 kubelet[2567]: I0709 13:09:57.040042 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:57.041341 kubelet[2567]: E0709 13:09:57.041328 2567 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:57.041548 kubelet[2567]: I0709 13:09:57.041427 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:57.042443 kubelet[2567]: E0709 13:09:57.042425 2567 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:57.471343 kubelet[2567]: I0709 13:09:57.470248 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:57.812880 kubelet[2567]: I0709 13:09:57.812855 2567 apiserver.go:52] "Watching apiserver" Jul 9 13:09:57.832239 kubelet[2567]: I0709 13:09:57.832213 2567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 13:09:58.539488 systemd[1]: Reload requested from client PID 2838 ('systemctl') (unit session-9.scope)... Jul 9 13:09:58.539501 systemd[1]: Reloading... Jul 9 13:09:58.606659 zram_generator::config[2884]: No configuration found. Jul 9 13:09:58.677702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 9 13:09:58.686938 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 9 13:09:58.774217 systemd[1]: Reloading finished in 234 ms. Jul 9 13:09:58.797350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:58.806550 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 13:09:58.806745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:58.806795 systemd[1]: kubelet.service: Consumed 576ms CPU time, 129.2M memory peak. Jul 9 13:09:58.808908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:09:59.478877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:09:59.482316 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 13:09:59.628984 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 13:09:59.628984 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 13:09:59.628984 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 13:09:59.629239 kubelet[2949]: I0709 13:09:59.629089 2949 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 13:09:59.684904 kubelet[2949]: I0709 13:09:59.684697 2949 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 13:09:59.684904 kubelet[2949]: I0709 13:09:59.684719 2949 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 13:09:59.685776 kubelet[2949]: I0709 13:09:59.685728 2949 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 13:09:59.692442 kubelet[2949]: I0709 13:09:59.691571 2949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 13:09:59.698613 kubelet[2949]: I0709 13:09:59.698594 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 13:09:59.702435 kubelet[2949]: I0709 13:09:59.702421 2949 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 13:09:59.705181 kubelet[2949]: I0709 13:09:59.705168 2949 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 13:09:59.705410 kubelet[2949]: I0709 13:09:59.705390 2949 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 13:09:59.705585 kubelet[2949]: I0709 13:09:59.705457 2949 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 13:09:59.705692 kubelet[2949]: I0709 13:09:59.705680 2949 topology_manager.go:138] "Creating topology manager with none policy" Jul 
9 13:09:59.705734 kubelet[2949]: I0709 13:09:59.705728 2949 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 13:09:59.705808 kubelet[2949]: I0709 13:09:59.705801 2949 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:09:59.705979 kubelet[2949]: I0709 13:09:59.705972 2949 kubelet.go:446] "Attempting to sync node with API server" Jul 9 13:09:59.706270 kubelet[2949]: I0709 13:09:59.706263 2949 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 13:09:59.706318 kubelet[2949]: I0709 13:09:59.706313 2949 kubelet.go:352] "Adding apiserver pod source" Jul 9 13:09:59.706363 kubelet[2949]: I0709 13:09:59.706358 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 13:09:59.707264 kubelet[2949]: I0709 13:09:59.707253 2949 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 13:09:59.710229 kubelet[2949]: I0709 13:09:59.710217 2949 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 13:09:59.711004 kubelet[2949]: I0709 13:09:59.710995 2949 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 13:09:59.711073 kubelet[2949]: I0709 13:09:59.711068 2949 server.go:1287] "Started kubelet" Jul 9 13:09:59.720480 kubelet[2949]: I0709 13:09:59.720451 2949 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 13:09:59.723819 kubelet[2949]: I0709 13:09:59.723802 2949 server.go:479] "Adding debug handlers to kubelet server" Jul 9 13:09:59.724437 kubelet[2949]: I0709 13:09:59.720629 2949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 13:09:59.724788 kubelet[2949]: I0709 13:09:59.724594 2949 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 13:09:59.725649 kubelet[2949]: I0709 13:09:59.725626 2949 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 13:09:59.727416 sudo[2964]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 13:09:59.727716 sudo[2964]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 13:09:59.728206 kubelet[2949]: I0709 13:09:59.728131 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 13:09:59.732970 kubelet[2949]: E0709 13:09:59.732107 2949 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 13:09:59.732970 kubelet[2949]: I0709 13:09:59.732609 2949 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 13:09:59.732970 kubelet[2949]: I0709 13:09:59.732698 2949 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 13:09:59.732970 kubelet[2949]: I0709 13:09:59.732770 2949 reconciler.go:26] "Reconciler: start to sync state" Jul 9 13:09:59.734734 kubelet[2949]: I0709 13:09:59.734305 2949 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 13:09:59.735280 kubelet[2949]: I0709 13:09:59.735265 2949 factory.go:221] Registration of the containerd container factory successfully Jul 9 13:09:59.735375 kubelet[2949]: I0709 13:09:59.735363 2949 factory.go:221] Registration of the systemd container factory successfully Jul 9 13:09:59.741909 kubelet[2949]: I0709 13:09:59.741881 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 13:09:59.742696 kubelet[2949]: I0709 13:09:59.742682 2949 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 13:09:59.742739 kubelet[2949]: I0709 13:09:59.742699 2949 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 13:09:59.742739 kubelet[2949]: I0709 13:09:59.742712 2949 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 13:09:59.742739 kubelet[2949]: I0709 13:09:59.742717 2949 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 13:09:59.742788 kubelet[2949]: E0709 13:09:59.742742 2949 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 13:09:59.790530 kubelet[2949]: I0709 13:09:59.790511 2949 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 13:09:59.790530 kubelet[2949]: I0709 13:09:59.790520 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 13:09:59.790672 kubelet[2949]: I0709 13:09:59.790542 2949 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:09:59.790672 kubelet[2949]: I0709 13:09:59.790652 2949 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 13:09:59.790672 kubelet[2949]: I0709 13:09:59.790659 2949 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 13:09:59.790672 kubelet[2949]: I0709 13:09:59.790670 2949 policy_none.go:49] "None policy: Start" Jul 9 13:09:59.790771 kubelet[2949]: I0709 13:09:59.790676 2949 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 13:09:59.790771 kubelet[2949]: I0709 13:09:59.790682 2949 state_mem.go:35] "Initializing new in-memory state store" Jul 9 13:09:59.790771 kubelet[2949]: I0709 13:09:59.790741 2949 state_mem.go:75] "Updated machine memory state" Jul 9 13:09:59.793464 kubelet[2949]: I0709 13:09:59.793401 2949 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 13:09:59.793524 kubelet[2949]: I0709 13:09:59.793494 
2949 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 13:09:59.793524 kubelet[2949]: I0709 13:09:59.793500 2949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 13:09:59.794741 kubelet[2949]: I0709 13:09:59.794711 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 13:09:59.796678 kubelet[2949]: E0709 13:09:59.796248 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 13:09:59.843922 kubelet[2949]: I0709 13:09:59.843902 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:59.857856 kubelet[2949]: I0709 13:09:59.857836 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:59.857939 kubelet[2949]: I0709 13:09:59.857836 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:59.867425 kubelet[2949]: E0709 13:09:59.867161 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:59.897069 kubelet[2949]: I0709 13:09:59.897048 2949 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:09:59.911274 kubelet[2949]: I0709 13:09:59.911240 2949 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 13:09:59.911364 kubelet[2949]: I0709 13:09:59.911306 2949 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 13:09:59.934193 kubelet[2949]: I0709 13:09:59.934162 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:59.934193 kubelet[2949]: I0709 13:09:59.934194 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 13:09:59.934289 kubelet[2949]: I0709 13:09:59.934206 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ae7f133d43a2a4e542539d39e6bc3a7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ae7f133d43a2a4e542539d39e6bc3a7\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:59.934289 kubelet[2949]: I0709 13:09:59.934218 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:59.934289 kubelet[2949]: I0709 13:09:59.934227 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ae7f133d43a2a4e542539d39e6bc3a7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ae7f133d43a2a4e542539d39e6bc3a7\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:59.934289 kubelet[2949]: I0709 13:09:59.934237 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8ae7f133d43a2a4e542539d39e6bc3a7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8ae7f133d43a2a4e542539d39e6bc3a7\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:09:59.934289 kubelet[2949]: I0709 13:09:59.934249 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:59.934374 kubelet[2949]: I0709 13:09:59.934258 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:09:59.934374 kubelet[2949]: I0709 13:09:59.934266 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:10:00.250027 sudo[2964]: pam_unix(sudo:session): session closed for user root Jul 9 13:10:00.707498 kubelet[2949]: I0709 13:10:00.707456 2949 apiserver.go:52] "Watching apiserver" Jul 9 13:10:00.733673 kubelet[2949]: I0709 13:10:00.733628 2949 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 13:10:00.755616 kubelet[2949]: I0709 13:10:00.755566 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.755552144 
podStartE2EDuration="3.755552144s" podCreationTimestamp="2025-07-09 13:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:10:00.755380819 +0000 UTC m=+1.191783954" watchObservedRunningTime="2025-07-09 13:10:00.755552144 +0000 UTC m=+1.191955277" Jul 9 13:10:00.761500 kubelet[2949]: I0709 13:10:00.761360 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.761345253 podStartE2EDuration="1.761345253s" podCreationTimestamp="2025-07-09 13:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:10:00.761120339 +0000 UTC m=+1.197523473" watchObservedRunningTime="2025-07-09 13:10:00.761345253 +0000 UTC m=+1.197748385" Jul 9 13:10:00.768648 kubelet[2949]: I0709 13:10:00.768468 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7684566130000001 podStartE2EDuration="1.768456613s" podCreationTimestamp="2025-07-09 13:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:10:00.768353293 +0000 UTC m=+1.204756433" watchObservedRunningTime="2025-07-09 13:10:00.768456613 +0000 UTC m=+1.204859746" Jul 9 13:10:00.775847 kubelet[2949]: I0709 13:10:00.774585 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 13:10:00.784958 kubelet[2949]: E0709 13:10:00.784835 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 13:10:02.149055 sudo[1970]: pam_unix(sudo:session): session closed for user root Jul 9 13:10:02.149915 sshd[1969]: Connection closed by 
139.178.68.195 port 39188 Jul 9 13:10:02.157066 sshd-session[1966]: pam_unix(sshd:session): session closed for user core Jul 9 13:10:02.159618 systemd[1]: sshd@6-139.178.70.108:22-139.178.68.195:39188.service: Deactivated successfully. Jul 9 13:10:02.160811 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 13:10:02.160929 systemd[1]: session-9.scope: Consumed 2.721s CPU time, 205.5M memory peak. Jul 9 13:10:02.161919 systemd-logind[1628]: Session 9 logged out. Waiting for processes to exit. Jul 9 13:10:02.162601 systemd-logind[1628]: Removed session 9. Jul 9 13:10:03.314330 kubelet[2949]: I0709 13:10:03.314310 2949 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 13:10:03.314803 containerd[1659]: time="2025-07-09T13:10:03.314722801Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 13:10:03.315379 kubelet[2949]: I0709 13:10:03.314876 2949 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 13:10:04.445436 systemd[1]: Created slice kubepods-besteffort-pod3e42b00c_318f_4569_b13c_a0ac3e7cfe46.slice - libcontainer container kubepods-besteffort-pod3e42b00c_318f_4569_b13c_a0ac3e7cfe46.slice. Jul 9 13:10:04.458040 systemd[1]: Created slice kubepods-burstable-podd84cc738_1909_460a_9971_4ee9bfc13ad0.slice - libcontainer container kubepods-burstable-podd84cc738_1909_460a_9971_4ee9bfc13ad0.slice. 
Jul 9 13:10:04.466188 kubelet[2949]: I0709 13:10:04.466159 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-config-path\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.466188 kubelet[2949]: I0709 13:10:04.466190 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-cgroup\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.466521 kubelet[2949]: I0709 13:10:04.466204 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e42b00c-318f-4569-b13c-a0ac3e7cfe46-lib-modules\") pod \"kube-proxy-75wsh\" (UID: \"3e42b00c-318f-4569-b13c-a0ac3e7cfe46\") " pod="kube-system/kube-proxy-75wsh" Jul 9 13:10:04.466521 kubelet[2949]: I0709 13:10:04.466221 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e42b00c-318f-4569-b13c-a0ac3e7cfe46-xtables-lock\") pod \"kube-proxy-75wsh\" (UID: \"3e42b00c-318f-4569-b13c-a0ac3e7cfe46\") " pod="kube-system/kube-proxy-75wsh" Jul 9 13:10:04.466521 kubelet[2949]: I0709 13:10:04.466233 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-bpf-maps\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.466521 kubelet[2949]: I0709 13:10:04.466243 2949 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-xtables-lock\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.466521 kubelet[2949]: I0709 13:10:04.466250 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d84cc738-1909-460a-9971-4ee9bfc13ad0-clustermesh-secrets\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.466521 kubelet[2949]: I0709 13:10:04.466259 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-net\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467024 kubelet[2949]: I0709 13:10:04.466268 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-hubble-tls\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467024 kubelet[2949]: I0709 13:10:04.466277 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e42b00c-318f-4569-b13c-a0ac3e7cfe46-kube-proxy\") pod \"kube-proxy-75wsh\" (UID: \"3e42b00c-318f-4569-b13c-a0ac3e7cfe46\") " pod="kube-system/kube-proxy-75wsh" Jul 9 13:10:04.467024 kubelet[2949]: I0709 13:10:04.466296 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-lib-modules\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467024 kubelet[2949]: I0709 13:10:04.466308 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-run\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467024 kubelet[2949]: I0709 13:10:04.466321 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-kernel\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467024 kubelet[2949]: I0709 13:10:04.466331 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5p5n\" (UniqueName: \"kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-kube-api-access-b5p5n\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467188 kubelet[2949]: I0709 13:10:04.466340 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cni-path\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467188 kubelet[2949]: I0709 13:10:04.466536 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-etc-cni-netd\") pod \"cilium-6vpld\" (UID: 
\"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.467188 kubelet[2949]: I0709 13:10:04.466555 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz57j\" (UniqueName: \"kubernetes.io/projected/3e42b00c-318f-4569-b13c-a0ac3e7cfe46-kube-api-access-dz57j\") pod \"kube-proxy-75wsh\" (UID: \"3e42b00c-318f-4569-b13c-a0ac3e7cfe46\") " pod="kube-system/kube-proxy-75wsh" Jul 9 13:10:04.467188 kubelet[2949]: I0709 13:10:04.466570 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-hostproc\") pod \"cilium-6vpld\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") " pod="kube-system/cilium-6vpld" Jul 9 13:10:04.483500 systemd[1]: Created slice kubepods-besteffort-pod1bad48ba_ad86_4d68_a0ab_8d3c80a7260e.slice - libcontainer container kubepods-besteffort-pod1bad48ba_ad86_4d68_a0ab_8d3c80a7260e.slice. 
Jul 9 13:10:04.566875 kubelet[2949]: I0709 13:10:04.566843 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-csntj\" (UID: \"1bad48ba-ad86-4d68-a0ab-8d3c80a7260e\") " pod="kube-system/cilium-operator-6c4d7847fc-csntj" Jul 9 13:10:04.566969 kubelet[2949]: I0709 13:10:04.566906 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgw7m\" (UniqueName: \"kubernetes.io/projected/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-kube-api-access-zgw7m\") pod \"cilium-operator-6c4d7847fc-csntj\" (UID: \"1bad48ba-ad86-4d68-a0ab-8d3c80a7260e\") " pod="kube-system/cilium-operator-6c4d7847fc-csntj" Jul 9 13:10:04.755198 containerd[1659]: time="2025-07-09T13:10:04.755134732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75wsh,Uid:3e42b00c-318f-4569-b13c-a0ac3e7cfe46,Namespace:kube-system,Attempt:0,}" Jul 9 13:10:04.761649 containerd[1659]: time="2025-07-09T13:10:04.761562633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vpld,Uid:d84cc738-1909-460a-9971-4ee9bfc13ad0,Namespace:kube-system,Attempt:0,}" Jul 9 13:10:04.793352 containerd[1659]: time="2025-07-09T13:10:04.793249322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-csntj,Uid:1bad48ba-ad86-4d68-a0ab-8d3c80a7260e,Namespace:kube-system,Attempt:0,}" Jul 9 13:10:04.883691 containerd[1659]: time="2025-07-09T13:10:04.883615673Z" level=info msg="connecting to shim c829710d7efa18f4f9bb446abacf6a7b1d7d7f9e9b50f52b877d145abadfcfcb" address="unix:///run/containerd/s/3f803bbf07772596f368e4708cf17578683324b93c80aa6182a97cb696e11639" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:10:04.885443 containerd[1659]: time="2025-07-09T13:10:04.885410154Z" level=info msg="connecting to shim 
ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e" address="unix:///run/containerd/s/51052ecd943d991f208552324b92b3a10c9c2486c1c5daa0df983c68906abaa5" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:10:04.901816 containerd[1659]: time="2025-07-09T13:10:04.901366109Z" level=info msg="connecting to shim a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a" address="unix:///run/containerd/s/a38fa2225066b48b983c14bda59e798a7ad3e62c2abefc171c3fc83350da84c4" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:10:04.907882 systemd[1]: Started cri-containerd-c829710d7efa18f4f9bb446abacf6a7b1d7d7f9e9b50f52b877d145abadfcfcb.scope - libcontainer container c829710d7efa18f4f9bb446abacf6a7b1d7d7f9e9b50f52b877d145abadfcfcb. Jul 9 13:10:04.913078 systemd[1]: Started cri-containerd-ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e.scope - libcontainer container ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e. Jul 9 13:10:04.931785 systemd[1]: Started cri-containerd-a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a.scope - libcontainer container a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a. 
Jul 9 13:10:04.945569 containerd[1659]: time="2025-07-09T13:10:04.945544111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75wsh,Uid:3e42b00c-318f-4569-b13c-a0ac3e7cfe46,Namespace:kube-system,Attempt:0,} returns sandbox id \"c829710d7efa18f4f9bb446abacf6a7b1d7d7f9e9b50f52b877d145abadfcfcb\"" Jul 9 13:10:04.948339 containerd[1659]: time="2025-07-09T13:10:04.948237513Z" level=info msg="CreateContainer within sandbox \"c829710d7efa18f4f9bb446abacf6a7b1d7d7f9e9b50f52b877d145abadfcfcb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 13:10:04.959081 containerd[1659]: time="2025-07-09T13:10:04.958952190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vpld,Uid:d84cc738-1909-460a-9971-4ee9bfc13ad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\"" Jul 9 13:10:04.960661 containerd[1659]: time="2025-07-09T13:10:04.960641160Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 13:10:04.994155 containerd[1659]: time="2025-07-09T13:10:04.994129255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-csntj,Uid:1bad48ba-ad86-4d68-a0ab-8d3c80a7260e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\"" Jul 9 13:10:04.998269 containerd[1659]: time="2025-07-09T13:10:04.998249805Z" level=info msg="Container 942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:10:05.031252 containerd[1659]: time="2025-07-09T13:10:05.031219295Z" level=info msg="CreateContainer within sandbox \"c829710d7efa18f4f9bb446abacf6a7b1d7d7f9e9b50f52b877d145abadfcfcb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963\"" Jul 9 
13:10:05.031768 containerd[1659]: time="2025-07-09T13:10:05.031746354Z" level=info msg="StartContainer for \"942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963\"" Jul 9 13:10:05.033356 containerd[1659]: time="2025-07-09T13:10:05.032953317Z" level=info msg="connecting to shim 942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963" address="unix:///run/containerd/s/3f803bbf07772596f368e4708cf17578683324b93c80aa6182a97cb696e11639" protocol=ttrpc version=3 Jul 9 13:10:05.050757 systemd[1]: Started cri-containerd-942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963.scope - libcontainer container 942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963. Jul 9 13:10:05.075506 containerd[1659]: time="2025-07-09T13:10:05.075479614Z" level=info msg="StartContainer for \"942354d08369ade9a8ab9099cacb12f4c91304ec6258fdb72a3d9f0aa8cbd963\" returns successfully" Jul 9 13:10:07.528268 kubelet[2949]: I0709 13:10:07.528236 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75wsh" podStartSLOduration=3.528225131 podStartE2EDuration="3.528225131s" podCreationTimestamp="2025-07-09 13:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:10:05.793601749 +0000 UTC m=+6.230004888" watchObservedRunningTime="2025-07-09 13:10:07.528225131 +0000 UTC m=+7.964628269" Jul 9 13:10:08.816873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586941765.mount: Deactivated successfully. 
Jul 9 13:10:10.763010 containerd[1659]: time="2025-07-09T13:10:10.762960269Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:10:10.776202 containerd[1659]: time="2025-07-09T13:10:10.776171543Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 9 13:10:10.810931 containerd[1659]: time="2025-07-09T13:10:10.810877891Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:10:10.811692 containerd[1659]: time="2025-07-09T13:10:10.811606907Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.850936196s" Jul 9 13:10:10.811692 containerd[1659]: time="2025-07-09T13:10:10.811625704Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 9 13:10:10.812895 containerd[1659]: time="2025-07-09T13:10:10.812709498Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 13:10:10.813733 containerd[1659]: time="2025-07-09T13:10:10.813685998Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 13:10:10.983655 containerd[1659]: time="2025-07-09T13:10:10.983406287Z" level=info msg="Container 63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:10:10.986630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101801829.mount: Deactivated successfully. Jul 9 13:10:11.116950 containerd[1659]: time="2025-07-09T13:10:11.116870942Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\"" Jul 9 13:10:11.117400 containerd[1659]: time="2025-07-09T13:10:11.117384942Z" level=info msg="StartContainer for \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\"" Jul 9 13:10:11.118063 containerd[1659]: time="2025-07-09T13:10:11.118047186Z" level=info msg="connecting to shim 63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade" address="unix:///run/containerd/s/51052ecd943d991f208552324b92b3a10c9c2486c1c5daa0df983c68906abaa5" protocol=ttrpc version=3 Jul 9 13:10:11.146723 systemd[1]: Started cri-containerd-63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade.scope - libcontainer container 63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade. Jul 9 13:10:11.166344 containerd[1659]: time="2025-07-09T13:10:11.166313780Z" level=info msg="StartContainer for \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" returns successfully" Jul 9 13:10:11.174263 systemd[1]: cri-containerd-63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade.scope: Deactivated successfully. 
Jul 9 13:10:11.185617 containerd[1659]: time="2025-07-09T13:10:11.185566086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" id:\"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" pid:3361 exited_at:{seconds:1752066611 nanos:174978707}" Jul 9 13:10:11.185797 containerd[1659]: time="2025-07-09T13:10:11.185642217Z" level=info msg="received exit event container_id:\"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" id:\"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" pid:3361 exited_at:{seconds:1752066611 nanos:174978707}" Jul 9 13:10:11.204420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade-rootfs.mount: Deactivated successfully. Jul 9 13:10:11.794554 containerd[1659]: time="2025-07-09T13:10:11.794524145Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 13:10:11.800376 containerd[1659]: time="2025-07-09T13:10:11.800348928Z" level=info msg="Container 080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:10:11.802707 containerd[1659]: time="2025-07-09T13:10:11.802683387Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\"" Jul 9 13:10:11.803292 containerd[1659]: time="2025-07-09T13:10:11.803276299Z" level=info msg="StartContainer for \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\"" Jul 9 13:10:11.804170 containerd[1659]: time="2025-07-09T13:10:11.804149758Z" level=info msg="connecting to shim 
080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee" address="unix:///run/containerd/s/51052ecd943d991f208552324b92b3a10c9c2486c1c5daa0df983c68906abaa5" protocol=ttrpc version=3 Jul 9 13:10:11.826740 systemd[1]: Started cri-containerd-080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee.scope - libcontainer container 080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee. Jul 9 13:10:11.853487 containerd[1659]: time="2025-07-09T13:10:11.853466341Z" level=info msg="StartContainer for \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" returns successfully" Jul 9 13:10:11.865175 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 13:10:11.865464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 13:10:11.865571 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 13:10:11.867598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 13:10:11.868696 systemd[1]: cri-containerd-080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee.scope: Deactivated successfully. Jul 9 13:10:11.870554 containerd[1659]: time="2025-07-09T13:10:11.870509744Z" level=info msg="received exit event container_id:\"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" id:\"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" pid:3405 exited_at:{seconds:1752066611 nanos:870267481}" Jul 9 13:10:11.870980 containerd[1659]: time="2025-07-09T13:10:11.870951221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" id:\"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" pid:3405 exited_at:{seconds:1752066611 nanos:870267481}" Jul 9 13:10:11.884827 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 13:10:12.230319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969420167.mount: Deactivated successfully. Jul 9 13:10:12.800600 containerd[1659]: time="2025-07-09T13:10:12.800565237Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 13:10:12.831058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058304664.mount: Deactivated successfully. Jul 9 13:10:12.837019 containerd[1659]: time="2025-07-09T13:10:12.832424107Z" level=info msg="Container 57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:10:12.857126 containerd[1659]: time="2025-07-09T13:10:12.857100801Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\"" Jul 9 13:10:12.857703 containerd[1659]: time="2025-07-09T13:10:12.857688105Z" level=info msg="StartContainer for \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\"" Jul 9 13:10:12.858387 containerd[1659]: time="2025-07-09T13:10:12.858369239Z" level=info msg="connecting to shim 57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d" address="unix:///run/containerd/s/51052ecd943d991f208552324b92b3a10c9c2486c1c5daa0df983c68906abaa5" protocol=ttrpc version=3 Jul 9 13:10:12.874237 containerd[1659]: time="2025-07-09T13:10:12.873375442Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:10:12.874237 containerd[1659]: time="2025-07-09T13:10:12.873770307Z" level=info msg="stop pulling image 
quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 9 13:10:12.875873 containerd[1659]: time="2025-07-09T13:10:12.875853356Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:10:12.877762 containerd[1659]: time="2025-07-09T13:10:12.877738445Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.06499751s" Jul 9 13:10:12.877762 containerd[1659]: time="2025-07-09T13:10:12.877759040Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 9 13:10:12.879488 containerd[1659]: time="2025-07-09T13:10:12.879466589Z" level=info msg="CreateContainer within sandbox \"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 13:10:12.880916 systemd[1]: Started cri-containerd-57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d.scope - libcontainer container 57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d. 
Jul 9 13:10:12.883655 containerd[1659]: time="2025-07-09T13:10:12.883244068Z" level=info msg="Container fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:10:12.889797 containerd[1659]: time="2025-07-09T13:10:12.889772319Z" level=info msg="CreateContainer within sandbox \"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\"" Jul 9 13:10:12.891304 containerd[1659]: time="2025-07-09T13:10:12.890590270Z" level=info msg="StartContainer for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\"" Jul 9 13:10:12.891304 containerd[1659]: time="2025-07-09T13:10:12.891064956Z" level=info msg="connecting to shim fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838" address="unix:///run/containerd/s/a38fa2225066b48b983c14bda59e798a7ad3e62c2abefc171c3fc83350da84c4" protocol=ttrpc version=3 Jul 9 13:10:12.907815 systemd[1]: Started cri-containerd-fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838.scope - libcontainer container fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838. Jul 9 13:10:12.920467 containerd[1659]: time="2025-07-09T13:10:12.920040317Z" level=info msg="StartContainer for \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" returns successfully" Jul 9 13:10:12.937473 containerd[1659]: time="2025-07-09T13:10:12.937434191Z" level=info msg="StartContainer for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" returns successfully" Jul 9 13:10:12.948012 systemd[1]: cri-containerd-57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d.scope: Deactivated successfully. Jul 9 13:10:12.948185 systemd[1]: cri-containerd-57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d.scope: Consumed 15ms CPU time, 4.2M memory peak, 1M read from disk. 
Jul 9 13:10:12.949088 containerd[1659]: time="2025-07-09T13:10:12.949034001Z" level=info msg="received exit event container_id:\"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" id:\"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" pid:3469 exited_at:{seconds:1752066612 nanos:948403457}" Jul 9 13:10:12.949304 containerd[1659]: time="2025-07-09T13:10:12.949080750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" id:\"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" pid:3469 exited_at:{seconds:1752066612 nanos:948403457}" Jul 9 13:10:13.806860 containerd[1659]: time="2025-07-09T13:10:13.806828929Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 13:10:13.811892 kubelet[2949]: I0709 13:10:13.811700 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-csntj" podStartSLOduration=1.928494087 podStartE2EDuration="9.811687355s" podCreationTimestamp="2025-07-09 13:10:04 +0000 UTC" firstStartedPulling="2025-07-09 13:10:04.994919502 +0000 UTC m=+5.431322630" lastFinishedPulling="2025-07-09 13:10:12.878112766 +0000 UTC m=+13.314515898" observedRunningTime="2025-07-09 13:10:13.811330861 +0000 UTC m=+14.247733998" watchObservedRunningTime="2025-07-09 13:10:13.811687355 +0000 UTC m=+14.248090493" Jul 9 13:10:13.819658 containerd[1659]: time="2025-07-09T13:10:13.817512186Z" level=info msg="Container a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:10:13.819115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902855934.mount: Deactivated successfully. 
Jul 9 13:10:13.830317 containerd[1659]: time="2025-07-09T13:10:13.830258263Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\"" Jul 9 13:10:13.835810 containerd[1659]: time="2025-07-09T13:10:13.833712761Z" level=info msg="StartContainer for \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\"" Jul 9 13:10:13.837432 containerd[1659]: time="2025-07-09T13:10:13.837401975Z" level=info msg="connecting to shim a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151" address="unix:///run/containerd/s/51052ecd943d991f208552324b92b3a10c9c2486c1c5daa0df983c68906abaa5" protocol=ttrpc version=3 Jul 9 13:10:13.859787 systemd[1]: Started cri-containerd-a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151.scope - libcontainer container a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151. Jul 9 13:10:13.880105 systemd[1]: cri-containerd-a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151.scope: Deactivated successfully. 
Jul 9 13:10:13.881098 containerd[1659]: time="2025-07-09T13:10:13.881072834Z" level=info msg="received exit event container_id:\"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" id:\"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" pid:3539 exited_at:{seconds:1752066613 nanos:880813389}" Jul 9 13:10:13.881251 containerd[1659]: time="2025-07-09T13:10:13.881231887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" id:\"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" pid:3539 exited_at:{seconds:1752066613 nanos:880813389}" Jul 9 13:10:13.887008 containerd[1659]: time="2025-07-09T13:10:13.886926982Z" level=info msg="StartContainer for \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" returns successfully" Jul 9 13:10:13.896858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151-rootfs.mount: Deactivated successfully. 
Jul 9 13:10:14.808590 containerd[1659]: time="2025-07-09T13:10:14.808521843Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 13:10:14.839205 containerd[1659]: time="2025-07-09T13:10:14.839182673Z" level=info msg="Container b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:10:14.857771 containerd[1659]: time="2025-07-09T13:10:14.857749379Z" level=info msg="CreateContainer within sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\""
Jul 9 13:10:14.858300 containerd[1659]: time="2025-07-09T13:10:14.858284065Z" level=info msg="StartContainer for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\""
Jul 9 13:10:14.858951 containerd[1659]: time="2025-07-09T13:10:14.858930829Z" level=info msg="connecting to shim b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247" address="unix:///run/containerd/s/51052ecd943d991f208552324b92b3a10c9c2486c1c5daa0df983c68906abaa5" protocol=ttrpc version=3
Jul 9 13:10:14.877792 systemd[1]: Started cri-containerd-b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247.scope - libcontainer container b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247.
Jul 9 13:10:14.899407 containerd[1659]: time="2025-07-09T13:10:14.899370582Z" level=info msg="StartContainer for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" returns successfully"
Jul 9 13:10:15.010829 containerd[1659]: time="2025-07-09T13:10:15.010625623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" id:\"8e978795310b5754748ba1f29da682a8748d41ece68a664e35d86982000ee8a9\" pid:3608 exited_at:{seconds:1752066615 nanos:10457121}"
Jul 9 13:10:15.059946 kubelet[2949]: I0709 13:10:15.059886 2949 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 9 13:10:15.086073 systemd[1]: Created slice kubepods-burstable-podbfe277a7_ad55_4e4e_adf6_164633ddb9e9.slice - libcontainer container kubepods-burstable-podbfe277a7_ad55_4e4e_adf6_164633ddb9e9.slice.
Jul 9 13:10:15.090310 systemd[1]: Created slice kubepods-burstable-poddf040616_02e3_46e6_9dcb_b9852bd63ef3.slice - libcontainer container kubepods-burstable-poddf040616_02e3_46e6_9dcb_b9852bd63ef3.slice.
Jul 9 13:10:15.133940 kubelet[2949]: I0709 13:10:15.133852 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qws55\" (UniqueName: \"kubernetes.io/projected/bfe277a7-ad55-4e4e-adf6-164633ddb9e9-kube-api-access-qws55\") pod \"coredns-668d6bf9bc-knnjl\" (UID: \"bfe277a7-ad55-4e4e-adf6-164633ddb9e9\") " pod="kube-system/coredns-668d6bf9bc-knnjl"
Jul 9 13:10:15.134129 kubelet[2949]: I0709 13:10:15.134065 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz2nr\" (UniqueName: \"kubernetes.io/projected/df040616-02e3-46e6-9dcb-b9852bd63ef3-kube-api-access-tz2nr\") pod \"coredns-668d6bf9bc-lzd9x\" (UID: \"df040616-02e3-46e6-9dcb-b9852bd63ef3\") " pod="kube-system/coredns-668d6bf9bc-lzd9x"
Jul 9 13:10:15.134129 kubelet[2949]: I0709 13:10:15.134099 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfe277a7-ad55-4e4e-adf6-164633ddb9e9-config-volume\") pod \"coredns-668d6bf9bc-knnjl\" (UID: \"bfe277a7-ad55-4e4e-adf6-164633ddb9e9\") " pod="kube-system/coredns-668d6bf9bc-knnjl"
Jul 9 13:10:15.134216 kubelet[2949]: I0709 13:10:15.134198 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df040616-02e3-46e6-9dcb-b9852bd63ef3-config-volume\") pod \"coredns-668d6bf9bc-lzd9x\" (UID: \"df040616-02e3-46e6-9dcb-b9852bd63ef3\") " pod="kube-system/coredns-668d6bf9bc-lzd9x"
Jul 9 13:10:15.390417 containerd[1659]: time="2025-07-09T13:10:15.390092948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knnjl,Uid:bfe277a7-ad55-4e4e-adf6-164633ddb9e9,Namespace:kube-system,Attempt:0,}"
Jul 9 13:10:15.395946 containerd[1659]: time="2025-07-09T13:10:15.395463624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lzd9x,Uid:df040616-02e3-46e6-9dcb-b9852bd63ef3,Namespace:kube-system,Attempt:0,}"
Jul 9 13:10:15.818766 kubelet[2949]: I0709 13:10:15.818726 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6vpld" podStartSLOduration=5.9664044910000005 podStartE2EDuration="11.818710639s" podCreationTimestamp="2025-07-09 13:10:04 +0000 UTC" firstStartedPulling="2025-07-09 13:10:04.959921254 +0000 UTC m=+5.396324382" lastFinishedPulling="2025-07-09 13:10:10.812227401 +0000 UTC m=+11.248630530" observedRunningTime="2025-07-09 13:10:15.818121855 +0000 UTC m=+16.254524993" watchObservedRunningTime="2025-07-09 13:10:15.818710639 +0000 UTC m=+16.255113777"
Jul 9 13:10:17.114350 systemd-networkd[1530]: cilium_host: Link UP
Jul 9 13:10:17.114733 systemd-networkd[1530]: cilium_net: Link UP
Jul 9 13:10:17.114837 systemd-networkd[1530]: cilium_net: Gained carrier
Jul 9 13:10:17.114930 systemd-networkd[1530]: cilium_host: Gained carrier
Jul 9 13:10:17.205182 systemd-networkd[1530]: cilium_host: Gained IPv6LL
Jul 9 13:10:17.240851 systemd-networkd[1530]: cilium_vxlan: Link UP
Jul 9 13:10:17.241901 systemd-networkd[1530]: cilium_vxlan: Gained carrier
Jul 9 13:10:17.909744 kernel: NET: Registered PF_ALG protocol family
Jul 9 13:10:18.092777 systemd-networkd[1530]: cilium_net: Gained IPv6LL
Jul 9 13:10:18.444509 systemd-networkd[1530]: lxc_health: Link UP
Jul 9 13:10:18.470108 systemd-networkd[1530]: lxc_health: Gained carrier
Jul 9 13:10:18.478186 systemd-networkd[1530]: cilium_vxlan: Gained IPv6LL
Jul 9 13:10:18.958692 kernel: eth0: renamed from tmpfcc8b
Jul 9 13:10:18.961712 kernel: eth0: renamed from tmpca23a
Jul 9 13:10:18.963458 systemd-networkd[1530]: lxc63cbf1b83972: Link UP
Jul 9 13:10:18.963611 systemd-networkd[1530]: lxcafa90fdcbaf8: Link UP
Jul 9 13:10:18.966567 systemd-networkd[1530]: lxc63cbf1b83972: Gained carrier
Jul 9 13:10:18.966716 systemd-networkd[1530]: lxcafa90fdcbaf8: Gained carrier
Jul 9 13:10:20.140809 systemd-networkd[1530]: lxcafa90fdcbaf8: Gained IPv6LL
Jul 9 13:10:20.204763 systemd-networkd[1530]: lxc_health: Gained IPv6LL
Jul 9 13:10:20.268816 systemd-networkd[1530]: lxc63cbf1b83972: Gained IPv6LL
Jul 9 13:10:21.875088 containerd[1659]: time="2025-07-09T13:10:21.874669575Z" level=info msg="connecting to shim fcc8b4f5d8c5ff8ae62f95d5d9d3492c679154dbe27e40c76ff2b2c2e6ee79d7" address="unix:///run/containerd/s/04f1264652081714f1f54da643550ebb96e537976041aae406b07fcbeaab865a" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:10:21.900223 containerd[1659]: time="2025-07-09T13:10:21.900189967Z" level=info msg="connecting to shim ca23a6bfe35cbf01ceb841d3a8d63862be8231ec261a2780ec3b06f113ec64d5" address="unix:///run/containerd/s/aeed0a3eddd90c98c1f7605e301ed9efd4557c550174a089d1855c411ceb71d3" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:10:21.903789 systemd[1]: Started cri-containerd-fcc8b4f5d8c5ff8ae62f95d5d9d3492c679154dbe27e40c76ff2b2c2e6ee79d7.scope - libcontainer container fcc8b4f5d8c5ff8ae62f95d5d9d3492c679154dbe27e40c76ff2b2c2e6ee79d7.
Jul 9 13:10:21.924864 systemd[1]: Started cri-containerd-ca23a6bfe35cbf01ceb841d3a8d63862be8231ec261a2780ec3b06f113ec64d5.scope - libcontainer container ca23a6bfe35cbf01ceb841d3a8d63862be8231ec261a2780ec3b06f113ec64d5.
Jul 9 13:10:21.929505 systemd-resolved[1531]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 9 13:10:21.949327 systemd-resolved[1531]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 9 13:10:21.986589 containerd[1659]: time="2025-07-09T13:10:21.986563572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lzd9x,Uid:df040616-02e3-46e6-9dcb-b9852bd63ef3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc8b4f5d8c5ff8ae62f95d5d9d3492c679154dbe27e40c76ff2b2c2e6ee79d7\""
Jul 9 13:10:21.988757 containerd[1659]: time="2025-07-09T13:10:21.988734947Z" level=info msg="CreateContainer within sandbox \"fcc8b4f5d8c5ff8ae62f95d5d9d3492c679154dbe27e40c76ff2b2c2e6ee79d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 13:10:22.001643 containerd[1659]: time="2025-07-09T13:10:22.001610419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knnjl,Uid:bfe277a7-ad55-4e4e-adf6-164633ddb9e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca23a6bfe35cbf01ceb841d3a8d63862be8231ec261a2780ec3b06f113ec64d5\""
Jul 9 13:10:22.003610 containerd[1659]: time="2025-07-09T13:10:22.003585640Z" level=info msg="CreateContainer within sandbox \"ca23a6bfe35cbf01ceb841d3a8d63862be8231ec261a2780ec3b06f113ec64d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 13:10:22.011826 containerd[1659]: time="2025-07-09T13:10:22.011727125Z" level=info msg="Container 03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:10:22.015602 containerd[1659]: time="2025-07-09T13:10:22.015582327Z" level=info msg="Container c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:10:22.015848 containerd[1659]: time="2025-07-09T13:10:22.015804470Z" level=info msg="CreateContainer within sandbox \"fcc8b4f5d8c5ff8ae62f95d5d9d3492c679154dbe27e40c76ff2b2c2e6ee79d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97\""
Jul 9 13:10:22.016122 containerd[1659]: time="2025-07-09T13:10:22.016104155Z" level=info msg="StartContainer for \"03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97\""
Jul 9 13:10:22.017055 containerd[1659]: time="2025-07-09T13:10:22.017042252Z" level=info msg="connecting to shim 03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97" address="unix:///run/containerd/s/04f1264652081714f1f54da643550ebb96e537976041aae406b07fcbeaab865a" protocol=ttrpc version=3
Jul 9 13:10:22.023178 containerd[1659]: time="2025-07-09T13:10:22.023142952Z" level=info msg="CreateContainer within sandbox \"ca23a6bfe35cbf01ceb841d3a8d63862be8231ec261a2780ec3b06f113ec64d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769\""
Jul 9 13:10:22.028380 containerd[1659]: time="2025-07-09T13:10:22.028281823Z" level=info msg="StartContainer for \"c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769\""
Jul 9 13:10:22.030098 containerd[1659]: time="2025-07-09T13:10:22.029608789Z" level=info msg="connecting to shim c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769" address="unix:///run/containerd/s/aeed0a3eddd90c98c1f7605e301ed9efd4557c550174a089d1855c411ceb71d3" protocol=ttrpc version=3
Jul 9 13:10:22.033789 systemd[1]: Started cri-containerd-03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97.scope - libcontainer container 03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97.
Jul 9 13:10:22.051829 systemd[1]: Started cri-containerd-c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769.scope - libcontainer container c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769.
Jul 9 13:10:22.080467 containerd[1659]: time="2025-07-09T13:10:22.080323893Z" level=info msg="StartContainer for \"03f8c2645c468668573c38644190ec1e822a3854d4c4998566f56cd12f253e97\" returns successfully"
Jul 9 13:10:22.085023 containerd[1659]: time="2025-07-09T13:10:22.084973748Z" level=info msg="StartContainer for \"c396086c272f66a8bd924f56b16ae559abf948d475bb9553bbaa1aea2c3f5769\" returns successfully"
Jul 9 13:10:22.837376 kubelet[2949]: I0709 13:10:22.837319 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-knnjl" podStartSLOduration=18.837303676 podStartE2EDuration="18.837303676s" podCreationTimestamp="2025-07-09 13:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:10:22.836919898 +0000 UTC m=+23.273323034" watchObservedRunningTime="2025-07-09 13:10:22.837303676 +0000 UTC m=+23.273706810"
Jul 9 13:10:22.868285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849116279.mount: Deactivated successfully.
Jul 9 13:10:22.891646 kubelet[2949]: I0709 13:10:22.891569 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lzd9x" podStartSLOduration=18.891555881 podStartE2EDuration="18.891555881s" podCreationTimestamp="2025-07-09 13:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:10:22.890936669 +0000 UTC m=+23.327339807" watchObservedRunningTime="2025-07-09 13:10:22.891555881 +0000 UTC m=+23.327959016"
Jul 9 13:11:06.225734 systemd[1]: Started sshd@7-139.178.70.108:22-139.178.68.195:49048.service - OpenSSH per-connection server daemon (139.178.68.195:49048).
Jul 9 13:11:06.308019 sshd[4267]: Accepted publickey for core from 139.178.68.195 port 49048 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:06.309256 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:06.313309 systemd-logind[1628]: New session 10 of user core.
Jul 9 13:11:06.326794 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 13:11:06.835664 sshd[4270]: Connection closed by 139.178.68.195 port 49048
Jul 9 13:11:06.835860 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:06.843207 systemd[1]: sshd@7-139.178.70.108:22-139.178.68.195:49048.service: Deactivated successfully.
Jul 9 13:11:06.844483 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 13:11:06.845071 systemd-logind[1628]: Session 10 logged out. Waiting for processes to exit.
Jul 9 13:11:06.846019 systemd-logind[1628]: Removed session 10.
Jul 9 13:11:11.850013 systemd[1]: Started sshd@8-139.178.70.108:22-139.178.68.195:45400.service - OpenSSH per-connection server daemon (139.178.68.195:45400).
Jul 9 13:11:11.924312 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 45400 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:11.925379 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:11.928743 systemd-logind[1628]: New session 11 of user core.
Jul 9 13:11:11.933724 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 13:11:12.124794 sshd[4286]: Connection closed by 139.178.68.195 port 45400
Jul 9 13:11:12.124460 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:12.140018 systemd[1]: sshd@8-139.178.70.108:22-139.178.68.195:45400.service: Deactivated successfully.
Jul 9 13:11:12.141030 systemd[1]: session-11.scope: Deactivated successfully.
Jul 9 13:11:12.141487 systemd-logind[1628]: Session 11 logged out. Waiting for processes to exit.
Jul 9 13:11:12.142748 systemd-logind[1628]: Removed session 11.
Jul 9 13:11:17.133366 systemd[1]: Started sshd@9-139.178.70.108:22-139.178.68.195:45408.service - OpenSSH per-connection server daemon (139.178.68.195:45408).
Jul 9 13:11:17.174192 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 45408 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:17.175129 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:17.177820 systemd-logind[1628]: New session 12 of user core.
Jul 9 13:11:17.184782 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 13:11:17.278171 sshd[4302]: Connection closed by 139.178.68.195 port 45408
Jul 9 13:11:17.278501 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:17.280587 systemd[1]: sshd@9-139.178.70.108:22-139.178.68.195:45408.service: Deactivated successfully.
Jul 9 13:11:17.281609 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 13:11:17.282135 systemd-logind[1628]: Session 12 logged out. Waiting for processes to exit.
Jul 9 13:11:17.282857 systemd-logind[1628]: Removed session 12.
Jul 9 13:11:22.294001 systemd[1]: Started sshd@10-139.178.70.108:22-139.178.68.195:50426.service - OpenSSH per-connection server daemon (139.178.68.195:50426).
Jul 9 13:11:22.334694 sshd[4316]: Accepted publickey for core from 139.178.68.195 port 50426 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:22.335838 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:22.339421 systemd-logind[1628]: New session 13 of user core.
Jul 9 13:11:22.345830 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 13:11:22.466202 sshd[4319]: Connection closed by 139.178.68.195 port 50426
Jul 9 13:11:22.466925 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:22.475403 systemd[1]: sshd@10-139.178.70.108:22-139.178.68.195:50426.service: Deactivated successfully.
Jul 9 13:11:22.477646 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 13:11:22.479169 systemd-logind[1628]: Session 13 logged out. Waiting for processes to exit.
Jul 9 13:11:22.481601 systemd[1]: Started sshd@11-139.178.70.108:22-139.178.68.195:50430.service - OpenSSH per-connection server daemon (139.178.68.195:50430).
Jul 9 13:11:22.484618 systemd-logind[1628]: Removed session 13.
Jul 9 13:11:22.526035 sshd[4332]: Accepted publickey for core from 139.178.68.195 port 50430 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:22.527341 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:22.531863 systemd-logind[1628]: New session 14 of user core.
Jul 9 13:11:22.550866 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 13:11:22.712834 sshd[4335]: Connection closed by 139.178.68.195 port 50430
Jul 9 13:11:22.713644 sshd-session[4332]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:22.723334 systemd[1]: sshd@11-139.178.70.108:22-139.178.68.195:50430.service: Deactivated successfully.
Jul 9 13:11:22.726269 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 13:11:22.728762 systemd-logind[1628]: Session 14 logged out. Waiting for processes to exit.
Jul 9 13:11:22.733845 systemd[1]: Started sshd@12-139.178.70.108:22-139.178.68.195:50442.service - OpenSSH per-connection server daemon (139.178.68.195:50442).
Jul 9 13:11:22.737786 systemd-logind[1628]: Removed session 14.
Jul 9 13:11:22.796957 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 50442 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:22.797860 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:22.801473 systemd-logind[1628]: New session 15 of user core.
Jul 9 13:11:22.810825 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 13:11:22.930986 sshd[4348]: Connection closed by 139.178.68.195 port 50442
Jul 9 13:11:22.930530 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:22.936716 systemd[1]: sshd@12-139.178.70.108:22-139.178.68.195:50442.service: Deactivated successfully.
Jul 9 13:11:22.938037 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 13:11:22.939306 systemd-logind[1628]: Session 15 logged out. Waiting for processes to exit.
Jul 9 13:11:22.940411 systemd-logind[1628]: Removed session 15.
Jul 9 13:11:27.940529 systemd[1]: Started sshd@13-139.178.70.108:22-139.178.68.195:50452.service - OpenSSH per-connection server daemon (139.178.68.195:50452).
Jul 9 13:11:27.983725 sshd[4361]: Accepted publickey for core from 139.178.68.195 port 50452 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:27.984769 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:27.987707 systemd-logind[1628]: New session 16 of user core.
Jul 9 13:11:27.992729 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 13:11:28.083481 sshd[4364]: Connection closed by 139.178.68.195 port 50452
Jul 9 13:11:28.083761 sshd-session[4361]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:28.086285 systemd[1]: sshd@13-139.178.70.108:22-139.178.68.195:50452.service: Deactivated successfully.
Jul 9 13:11:28.087938 systemd-logind[1628]: Session 16 logged out. Waiting for processes to exit.
Jul 9 13:11:28.087965 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 13:11:28.088990 systemd-logind[1628]: Removed session 16.
Jul 9 13:11:33.094087 systemd[1]: Started sshd@14-139.178.70.108:22-139.178.68.195:37828.service - OpenSSH per-connection server daemon (139.178.68.195:37828).
Jul 9 13:11:33.269922 sshd[4376]: Accepted publickey for core from 139.178.68.195 port 37828 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:33.270787 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:33.273560 systemd-logind[1628]: New session 17 of user core.
Jul 9 13:11:33.278928 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 13:11:33.401654 sshd[4379]: Connection closed by 139.178.68.195 port 37828
Jul 9 13:11:33.402185 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:33.404615 systemd[1]: sshd@14-139.178.70.108:22-139.178.68.195:37828.service: Deactivated successfully.
Jul 9 13:11:33.404761 systemd-logind[1628]: Session 17 logged out. Waiting for processes to exit.
Jul 9 13:11:33.406026 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 13:11:33.407072 systemd-logind[1628]: Removed session 17.
Jul 9 13:11:38.412409 systemd[1]: Started sshd@15-139.178.70.108:22-139.178.68.195:50762.service - OpenSSH per-connection server daemon (139.178.68.195:50762).
Jul 9 13:11:38.453631 sshd[4393]: Accepted publickey for core from 139.178.68.195 port 50762 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:38.454379 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:38.456852 systemd-logind[1628]: New session 18 of user core.
Jul 9 13:11:38.460720 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 13:11:38.626999 sshd[4396]: Connection closed by 139.178.68.195 port 50762
Jul 9 13:11:38.627445 sshd-session[4393]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:38.635285 systemd[1]: sshd@15-139.178.70.108:22-139.178.68.195:50762.service: Deactivated successfully.
Jul 9 13:11:38.637138 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 13:11:38.638500 systemd-logind[1628]: Session 18 logged out. Waiting for processes to exit.
Jul 9 13:11:38.641109 systemd[1]: Started sshd@16-139.178.70.108:22-139.178.68.195:50764.service - OpenSSH per-connection server daemon (139.178.68.195:50764).
Jul 9 13:11:38.642325 systemd-logind[1628]: Removed session 18.
Jul 9 13:11:38.707094 sshd[4407]: Accepted publickey for core from 139.178.68.195 port 50764 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:38.708286 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:38.711902 systemd-logind[1628]: New session 19 of user core.
Jul 9 13:11:38.722850 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 13:11:39.386237 sshd[4410]: Connection closed by 139.178.68.195 port 50764
Jul 9 13:11:39.386567 sshd-session[4407]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:39.393769 systemd[1]: sshd@16-139.178.70.108:22-139.178.68.195:50764.service: Deactivated successfully.
Jul 9 13:11:39.395175 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 13:11:39.396699 systemd-logind[1628]: Session 19 logged out. Waiting for processes to exit.
Jul 9 13:11:39.397547 systemd[1]: Started sshd@17-139.178.70.108:22-139.178.68.195:50778.service - OpenSSH per-connection server daemon (139.178.68.195:50778).
Jul 9 13:11:39.398477 systemd-logind[1628]: Removed session 19.
Jul 9 13:11:39.444096 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 50778 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:39.444856 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:39.447897 systemd-logind[1628]: New session 20 of user core.
Jul 9 13:11:39.452720 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 13:11:41.528183 sshd[4423]: Connection closed by 139.178.68.195 port 50778
Jul 9 13:11:41.528560 sshd-session[4420]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:41.539197 systemd[1]: sshd@17-139.178.70.108:22-139.178.68.195:50778.service: Deactivated successfully.
Jul 9 13:11:41.540792 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 13:11:41.541255 systemd-logind[1628]: Session 20 logged out. Waiting for processes to exit.
Jul 9 13:11:41.542797 systemd[1]: Started sshd@18-139.178.70.108:22-139.178.68.195:50794.service - OpenSSH per-connection server daemon (139.178.68.195:50794).
Jul 9 13:11:41.543513 systemd-logind[1628]: Removed session 20.
Jul 9 13:11:41.628307 sshd[4440]: Accepted publickey for core from 139.178.68.195 port 50794 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:41.629281 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:41.632206 systemd-logind[1628]: New session 21 of user core.
Jul 9 13:11:41.640789 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 13:11:42.063956 sshd[4443]: Connection closed by 139.178.68.195 port 50794
Jul 9 13:11:42.064408 sshd-session[4440]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:42.074412 systemd[1]: sshd@18-139.178.70.108:22-139.178.68.195:50794.service: Deactivated successfully.
Jul 9 13:11:42.075680 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 13:11:42.076234 systemd-logind[1628]: Session 21 logged out. Waiting for processes to exit.
Jul 9 13:11:42.079502 systemd[1]: Started sshd@19-139.178.70.108:22-139.178.68.195:50802.service - OpenSSH per-connection server daemon (139.178.68.195:50802).
Jul 9 13:11:42.080392 systemd-logind[1628]: Removed session 21.
Jul 9 13:11:42.121081 sshd[4452]: Accepted publickey for core from 139.178.68.195 port 50802 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:42.122361 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:42.126461 systemd-logind[1628]: New session 22 of user core.
Jul 9 13:11:42.129968 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 13:11:42.225082 sshd[4455]: Connection closed by 139.178.68.195 port 50802
Jul 9 13:11:42.225542 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:42.227780 systemd-logind[1628]: Session 22 logged out. Waiting for processes to exit.
Jul 9 13:11:42.228157 systemd[1]: sshd@19-139.178.70.108:22-139.178.68.195:50802.service: Deactivated successfully.
Jul 9 13:11:42.229796 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 13:11:42.231366 systemd-logind[1628]: Removed session 22.
Jul 9 13:11:47.235776 systemd[1]: Started sshd@20-139.178.70.108:22-139.178.68.195:50818.service - OpenSSH per-connection server daemon (139.178.68.195:50818).
Jul 9 13:11:47.275488 sshd[4469]: Accepted publickey for core from 139.178.68.195 port 50818 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:47.276176 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:47.278650 systemd-logind[1628]: New session 23 of user core.
Jul 9 13:11:47.285715 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 9 13:11:47.373360 sshd[4472]: Connection closed by 139.178.68.195 port 50818
Jul 9 13:11:47.373744 sshd-session[4469]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:47.375539 systemd-logind[1628]: Session 23 logged out. Waiting for processes to exit.
Jul 9 13:11:47.376548 systemd[1]: sshd@20-139.178.70.108:22-139.178.68.195:50818.service: Deactivated successfully.
Jul 9 13:11:47.377766 systemd[1]: session-23.scope: Deactivated successfully.
Jul 9 13:11:47.379108 systemd-logind[1628]: Removed session 23.
Jul 9 13:11:52.392017 systemd[1]: Started sshd@21-139.178.70.108:22-139.178.68.195:39640.service - OpenSSH per-connection server daemon (139.178.68.195:39640).
Jul 9 13:11:52.433203 sshd[4484]: Accepted publickey for core from 139.178.68.195 port 39640 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:52.434092 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:52.436539 systemd-logind[1628]: New session 24 of user core.
Jul 9 13:11:52.446727 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 13:11:52.536520 sshd[4487]: Connection closed by 139.178.68.195 port 39640
Jul 9 13:11:52.537027 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:52.539789 systemd[1]: sshd@21-139.178.70.108:22-139.178.68.195:39640.service: Deactivated successfully.
Jul 9 13:11:52.541536 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 13:11:52.542337 systemd-logind[1628]: Session 24 logged out. Waiting for processes to exit.
Jul 9 13:11:52.543853 systemd-logind[1628]: Removed session 24.
Jul 9 13:11:57.551798 systemd[1]: Started sshd@22-139.178.70.108:22-139.178.68.195:39652.service - OpenSSH per-connection server daemon (139.178.68.195:39652).
Jul 9 13:11:57.590894 sshd[4499]: Accepted publickey for core from 139.178.68.195 port 39652 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:11:57.591722 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:11:57.594534 systemd-logind[1628]: New session 25 of user core.
Jul 9 13:11:57.599736 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 13:11:57.697657 sshd[4502]: Connection closed by 139.178.68.195 port 39652
Jul 9 13:11:57.698030 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Jul 9 13:11:57.700324 systemd[1]: sshd@22-139.178.70.108:22-139.178.68.195:39652.service: Deactivated successfully.
Jul 9 13:11:57.701745 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 13:11:57.702345 systemd-logind[1628]: Session 25 logged out. Waiting for processes to exit.
Jul 9 13:11:57.703206 systemd-logind[1628]: Removed session 25.
Jul 9 13:12:02.712327 systemd[1]: Started sshd@23-139.178.70.108:22-139.178.68.195:40726.service - OpenSSH per-connection server daemon (139.178.68.195:40726).
Jul 9 13:12:02.749940 sshd[4516]: Accepted publickey for core from 139.178.68.195 port 40726 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:12:02.751115 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:12:02.754147 systemd-logind[1628]: New session 26 of user core.
Jul 9 13:12:02.760732 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 13:12:02.858339 sshd[4519]: Connection closed by 139.178.68.195 port 40726
Jul 9 13:12:02.859193 sshd-session[4516]: pam_unix(sshd:session): session closed for user core
Jul 9 13:12:02.868875 systemd[1]: sshd@23-139.178.70.108:22-139.178.68.195:40726.service: Deactivated successfully.
Jul 9 13:12:02.869941 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 13:12:02.870495 systemd-logind[1628]: Session 26 logged out. Waiting for processes to exit.
Jul 9 13:12:02.872065 systemd[1]: Started sshd@24-139.178.70.108:22-139.178.68.195:40742.service - OpenSSH per-connection server daemon (139.178.68.195:40742).
Jul 9 13:12:02.873128 systemd-logind[1628]: Removed session 26.
Jul 9 13:12:02.909461 sshd[4531]: Accepted publickey for core from 139.178.68.195 port 40742 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:12:02.910238 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:12:02.913064 systemd-logind[1628]: New session 27 of user core.
Jul 9 13:12:02.918728 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 13:12:04.583682 containerd[1659]: time="2025-07-09T13:12:04.583601774Z" level=info msg="StopContainer for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" with timeout 30 (s)"
Jul 9 13:12:04.587387 containerd[1659]: time="2025-07-09T13:12:04.587372517Z" level=info msg="Stop container \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" with signal terminated"
Jul 9 13:12:04.601773 systemd[1]: cri-containerd-fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838.scope: Deactivated successfully.
Jul 9 13:12:04.602465 containerd[1659]: time="2025-07-09T13:12:04.602281826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" id:\"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" pid:3493 exited_at:{seconds:1752066724 nanos:601548700}"
Jul 9 13:12:04.602588 containerd[1659]: time="2025-07-09T13:12:04.602520276Z" level=info msg="received exit event container_id:\"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" id:\"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" pid:3493 exited_at:{seconds:1752066724 nanos:601548700}"
Jul 9 13:12:04.620320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838-rootfs.mount: Deactivated successfully.
Jul 9 13:12:04.678019 containerd[1659]: time="2025-07-09T13:12:04.677994518Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 13:12:04.682837 containerd[1659]: time="2025-07-09T13:12:04.682766669Z" level=info msg="StopContainer for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" returns successfully"
Jul 9 13:12:04.683303 containerd[1659]: time="2025-07-09T13:12:04.683277945Z" level=info msg="StopPodSandbox for \"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\""
Jul 9 13:12:04.683356 containerd[1659]: time="2025-07-09T13:12:04.683340746Z" level=info msg="Container to stop \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 13:12:04.693274 systemd[1]: cri-containerd-a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a.scope: Deactivated successfully.
Jul 9 13:12:04.695584 containerd[1659]: time="2025-07-09T13:12:04.695492203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" id:\"1854645e0e9f68e333017c34bb5c23f82b4e7933a388c35a4c579cf1ce9758ee\" pid:4560 exited_at:{seconds:1752066724 nanos:694205412}"
Jul 9 13:12:04.695892 containerd[1659]: time="2025-07-09T13:12:04.695804266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" pid:3134 exit_status:137 exited_at:{seconds:1752066724 nanos:694032179}"
Jul 9 13:12:04.697059 containerd[1659]: time="2025-07-09T13:12:04.697038084Z" level=info msg="StopContainer for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" with timeout 2 (s)"
Jul 9 13:12:04.697245 containerd[1659]: time="2025-07-09T13:12:04.697205753Z" level=info msg="Stop container \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" with signal terminated"
Jul 9 13:12:04.703541 systemd-networkd[1530]: lxc_health: Link DOWN
Jul 9 13:12:04.703546 systemd-networkd[1530]: lxc_health: Lost carrier
Jul 9 13:12:04.715655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a-rootfs.mount: Deactivated successfully.
Jul 9 13:12:04.753266 systemd[1]: cri-containerd-b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247.scope: Deactivated successfully.
Jul 9 13:12:04.753771 systemd[1]: cri-containerd-b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247.scope: Consumed 5.055s CPU time, 219.6M memory peak, 96.2M read from disk, 13.3M written to disk.
Jul 9 13:12:04.755431 containerd[1659]: time="2025-07-09T13:12:04.753661316Z" level=info msg="received exit event container_id:\"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" id:\"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" pid:3577 exited_at:{seconds:1752066724 nanos:753395587}"
Jul 9 13:12:04.767378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247-rootfs.mount: Deactivated successfully.
Jul 9 13:12:04.812292 containerd[1659]: time="2025-07-09T13:12:04.812265455Z" level=info msg="StopContainer for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" returns successfully"
Jul 9 13:12:04.812489 containerd[1659]: time="2025-07-09T13:12:04.812295858Z" level=info msg="shim disconnected" id=a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a namespace=k8s.io
Jul 9 13:12:04.812489 containerd[1659]: time="2025-07-09T13:12:04.812387681Z" level=warning msg="cleaning up after shim disconnected" id=a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a namespace=k8s.io
Jul 9 13:12:04.815911 containerd[1659]: time="2025-07-09T13:12:04.812392122Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 13:12:04.816071 containerd[1659]: time="2025-07-09T13:12:04.812803184Z" level=info msg="StopPodSandbox for \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\""
Jul 9 13:12:04.816071 containerd[1659]: time="2025-07-09T13:12:04.816057955Z" level=info msg="Container to stop \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 13:12:04.816071 containerd[1659]: time="2025-07-09T13:12:04.816065910Z" level=info msg="Container to stop \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 13:12:04.816139 containerd[1659]: time="2025-07-09T13:12:04.816074323Z" level=info msg="Container to stop \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 13:12:04.816139 containerd[1659]: time="2025-07-09T13:12:04.816080294Z" level=info msg="Container to stop \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 13:12:04.816139 containerd[1659]: time="2025-07-09T13:12:04.816085066Z" level=info msg="Container to stop \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 13:12:04.818517 kubelet[2949]: E0709 13:12:04.818424 2949 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 13:12:04.821956 systemd[1]: cri-containerd-ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e.scope: Deactivated successfully.
Jul 9 13:12:04.828788 containerd[1659]: time="2025-07-09T13:12:04.828631407Z" level=error msg="Failed to handle event container_id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" pid:3134 exit_status:137 exited_at:{seconds:1752066724 nanos:694032179} for a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
Jul 9 13:12:04.828788 containerd[1659]: time="2025-07-09T13:12:04.828694586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" id:\"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" pid:3577 exited_at:{seconds:1752066724 nanos:753395587}"
Jul 9 13:12:04.828788 containerd[1659]: time="2025-07-09T13:12:04.828717206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" id:\"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" pid:3119 exit_status:137 exited_at:{seconds:1752066724 nanos:822121209}"
Jul 9 13:12:04.828788 containerd[1659]: time="2025-07-09T13:12:04.828757053Z" level=info msg="received exit event sandbox_id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" exit_status:137 exited_at:{seconds:1752066724 nanos:694032179}"
Jul 9 13:12:04.830968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a-shm.mount: Deactivated successfully.
Jul 9 13:12:04.832155 containerd[1659]: time="2025-07-09T13:12:04.831715830Z" level=info msg="TearDown network for sandbox \"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" successfully"
Jul 9 13:12:04.832155 containerd[1659]: time="2025-07-09T13:12:04.831739648Z" level=info msg="StopPodSandbox for \"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" returns successfully"
Jul 9 13:12:04.840497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e-rootfs.mount: Deactivated successfully.
Jul 9 13:12:04.852210 containerd[1659]: time="2025-07-09T13:12:04.852173486Z" level=info msg="shim disconnected" id=ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e namespace=k8s.io
Jul 9 13:12:04.852210 containerd[1659]: time="2025-07-09T13:12:04.852193324Z" level=warning msg="cleaning up after shim disconnected" id=ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e namespace=k8s.io
Jul 9 13:12:04.852210 containerd[1659]: time="2025-07-09T13:12:04.852197855Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 13:12:04.852820 containerd[1659]: time="2025-07-09T13:12:04.852802814Z" level=info msg="received exit event sandbox_id:\"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" exit_status:137 exited_at:{seconds:1752066724 nanos:822121209}"
Jul 9 13:12:04.853736 containerd[1659]: time="2025-07-09T13:12:04.853720856Z" level=info msg="TearDown network for sandbox \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" successfully"
Jul 9 13:12:04.853736 containerd[1659]: time="2025-07-09T13:12:04.853734825Z" level=info msg="StopPodSandbox for \"ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e\" returns successfully"
Jul 9 13:12:04.985546 kubelet[2949]: I0709 13:12:04.985517 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cni-path\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.985868 kubelet[2949]: I0709 13:12:04.985735 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-hostproc\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.985868 kubelet[2949]: I0709 13:12:04.985733 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cni-path" (OuterVolumeSpecName: "cni-path") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.985868 kubelet[2949]: I0709 13:12:04.985750 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-run\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.985868 kubelet[2949]: I0709 13:12:04.985760 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-lib-modules\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.985868 kubelet[2949]: I0709 13:12:04.985777 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-kernel\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.985868 kubelet[2949]: I0709 13:12:04.985779 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-hostproc" (OuterVolumeSpecName: "hostproc") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.986008 kubelet[2949]: I0709 13:12:04.985787 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.986008 kubelet[2949]: I0709 13:12:04.985794 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.986008 kubelet[2949]: I0709 13:12:04.985800 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.986008 kubelet[2949]: I0709 13:12:04.985803 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-config-path\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986008 kubelet[2949]: I0709 13:12:04.985835 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-hubble-tls\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986096 kubelet[2949]: I0709 13:12:04.985846 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-cgroup\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986096 kubelet[2949]: I0709 13:12:04.985857 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-cilium-config-path\") pod \"1bad48ba-ad86-4d68-a0ab-8d3c80a7260e\" (UID: \"1bad48ba-ad86-4d68-a0ab-8d3c80a7260e\") "
Jul 9 13:12:04.986096 kubelet[2949]: I0709 13:12:04.985867 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-xtables-lock\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986096 kubelet[2949]: I0709 13:12:04.985874 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-net\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986096 kubelet[2949]: I0709 13:12:04.985885 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-etc-cni-netd\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986096 kubelet[2949]: I0709 13:12:04.985895 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5p5n\" (UniqueName: \"kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-kube-api-access-b5p5n\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985904 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d84cc738-1909-460a-9971-4ee9bfc13ad0-clustermesh-secrets\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985918 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgw7m\" (UniqueName: \"kubernetes.io/projected/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-kube-api-access-zgw7m\") pod \"1bad48ba-ad86-4d68-a0ab-8d3c80a7260e\" (UID: \"1bad48ba-ad86-4d68-a0ab-8d3c80a7260e\") "
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985928 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-bpf-maps\") pod \"d84cc738-1909-460a-9971-4ee9bfc13ad0\" (UID: \"d84cc738-1909-460a-9971-4ee9bfc13ad0\") "
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985961 2949 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985968 2949 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985973 2949 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 9 13:12:04.986192 kubelet[2949]: I0709 13:12:04.985978 2949 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 9 13:12:04.986311 kubelet[2949]: I0709 13:12:04.985982 2949 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 9 13:12:04.986311 kubelet[2949]: I0709 13:12:04.985994 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.987642 kubelet[2949]: I0709 13:12:04.987582 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 9 13:12:04.987642 kubelet[2949]: I0709 13:12:04.987610 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.987642 kubelet[2949]: I0709 13:12:04.987621 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.989023 kubelet[2949]: I0709 13:12:04.988944 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1bad48ba-ad86-4d68-a0ab-8d3c80a7260e" (UID: "1bad48ba-ad86-4d68-a0ab-8d3c80a7260e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 9 13:12:04.989023 kubelet[2949]: I0709 13:12:04.988964 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.996087 kubelet[2949]: I0709 13:12:04.996056 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-kube-api-access-b5p5n" (OuterVolumeSpecName: "kube-api-access-b5p5n") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "kube-api-access-b5p5n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 13:12:04.996193 kubelet[2949]: I0709 13:12:04.996178 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 13:12:04.996253 kubelet[2949]: I0709 13:12:04.996244 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 13:12:04.997715 kubelet[2949]: I0709 13:12:04.997692 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-kube-api-access-zgw7m" (OuterVolumeSpecName: "kube-api-access-zgw7m") pod "1bad48ba-ad86-4d68-a0ab-8d3c80a7260e" (UID: "1bad48ba-ad86-4d68-a0ab-8d3c80a7260e"). InnerVolumeSpecName "kube-api-access-zgw7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 13:12:04.997767 kubelet[2949]: I0709 13:12:04.997749 2949 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84cc738-1909-460a-9971-4ee9bfc13ad0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d84cc738-1909-460a-9971-4ee9bfc13ad0" (UID: "d84cc738-1909-460a-9971-4ee9bfc13ad0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 9 13:12:05.050659 kubelet[2949]: I0709 13:12:05.049994 2949 scope.go:117] "RemoveContainer" containerID="fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838"
Jul 9 13:12:05.053926 systemd[1]: Removed slice kubepods-besteffort-pod1bad48ba_ad86_4d68_a0ab_8d3c80a7260e.slice - libcontainer container kubepods-besteffort-pod1bad48ba_ad86_4d68_a0ab_8d3c80a7260e.slice.
Jul 9 13:12:05.064308 systemd[1]: Removed slice kubepods-burstable-podd84cc738_1909_460a_9971_4ee9bfc13ad0.slice - libcontainer container kubepods-burstable-podd84cc738_1909_460a_9971_4ee9bfc13ad0.slice.
Jul 9 13:12:05.064779 containerd[1659]: time="2025-07-09T13:12:05.060162202Z" level=info msg="RemoveContainer for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\""
Jul 9 13:12:05.064367 systemd[1]: kubepods-burstable-podd84cc738_1909_460a_9971_4ee9bfc13ad0.slice: Consumed 5.113s CPU time, 220.7M memory peak, 97.3M read from disk, 13.3M written to disk.
Jul 9 13:12:05.081869 containerd[1659]: time="2025-07-09T13:12:05.081380090Z" level=info msg="RemoveContainer for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" returns successfully" Jul 9 13:12:05.082142 kubelet[2949]: I0709 13:12:05.082073 2949 scope.go:117] "RemoveContainer" containerID="fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838" Jul 9 13:12:05.082412 containerd[1659]: time="2025-07-09T13:12:05.082373188Z" level=error msg="ContainerStatus for \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\": not found" Jul 9 13:12:05.083649 kubelet[2949]: E0709 13:12:05.083206 2949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\": not found" containerID="fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838" Jul 9 13:12:05.083649 kubelet[2949]: I0709 13:12:05.083235 2949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838"} err="failed to get container status \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb1dc13511474fdde58ff028a6b1058a781f8207725d6bac53c385237f2cb838\": not found" Jul 9 13:12:05.083649 kubelet[2949]: I0709 13:12:05.083284 2949 scope.go:117] "RemoveContainer" containerID="b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247" Jul 9 13:12:05.084908 containerd[1659]: time="2025-07-09T13:12:05.084882216Z" level=info msg="RemoveContainer for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\"" Jul 9 13:12:05.086344 kubelet[2949]: 
I0709 13:12:05.086322 2949 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086344 kubelet[2949]: I0709 13:12:05.086337 2949 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086344 kubelet[2949]: I0709 13:12:05.086344 2949 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086350 2949 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086356 2949 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086361 2949 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5p5n\" (UniqueName: \"kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-kube-api-access-b5p5n\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086365 2949 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d84cc738-1909-460a-9971-4ee9bfc13ad0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086371 2949 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-zgw7m\" (UniqueName: \"kubernetes.io/projected/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e-kube-api-access-zgw7m\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086375 2949 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d84cc738-1909-460a-9971-4ee9bfc13ad0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086381 2949 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84cc738-1909-460a-9971-4ee9bfc13ad0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.086684 kubelet[2949]: I0709 13:12:05.086386 2949 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d84cc738-1909-460a-9971-4ee9bfc13ad0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:05.087630 containerd[1659]: time="2025-07-09T13:12:05.087609061Z" level=info msg="RemoveContainer for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" returns successfully" Jul 9 13:12:05.087873 kubelet[2949]: I0709 13:12:05.087746 2949 scope.go:117] "RemoveContainer" containerID="a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151" Jul 9 13:12:05.089010 containerd[1659]: time="2025-07-09T13:12:05.088994106Z" level=info msg="RemoveContainer for \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\"" Jul 9 13:12:05.091918 containerd[1659]: time="2025-07-09T13:12:05.091821725Z" level=info msg="RemoveContainer for \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" returns successfully" Jul 9 13:12:05.092121 kubelet[2949]: I0709 13:12:05.092101 2949 scope.go:117] "RemoveContainer" containerID="57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d" Jul 9 13:12:05.095442 containerd[1659]: time="2025-07-09T13:12:05.095417396Z" level=info 
msg="RemoveContainer for \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\""
Jul 9 13:12:05.097215 containerd[1659]: time="2025-07-09T13:12:05.097198855Z" level=info msg="RemoveContainer for \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" returns successfully"
Jul 9 13:12:05.097311 kubelet[2949]: I0709 13:12:05.097297 2949 scope.go:117] "RemoveContainer" containerID="080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee"
Jul 9 13:12:05.098631 containerd[1659]: time="2025-07-09T13:12:05.098144948Z" level=info msg="RemoveContainer for \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\""
Jul 9 13:12:05.099606 containerd[1659]: time="2025-07-09T13:12:05.099591516Z" level=info msg="RemoveContainer for \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" returns successfully"
Jul 9 13:12:05.099773 kubelet[2949]: I0709 13:12:05.099758 2949 scope.go:117] "RemoveContainer" containerID="63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade"
Jul 9 13:12:05.100520 containerd[1659]: time="2025-07-09T13:12:05.100502924Z" level=info msg="RemoveContainer for \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\""
Jul 9 13:12:05.102298 containerd[1659]: time="2025-07-09T13:12:05.101915746Z" level=info msg="RemoveContainer for \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" returns successfully"
Jul 9 13:12:05.102298 containerd[1659]: time="2025-07-09T13:12:05.102252382Z" level=error msg="ContainerStatus for \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\": not found"
Jul 9 13:12:05.102387 kubelet[2949]: I0709 13:12:05.102098 2949 scope.go:117] "RemoveContainer" containerID="b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247"
Jul 9 13:12:05.102387 kubelet[2949]: 
E0709 13:12:05.102331 2949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\": not found" containerID="b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247"
Jul 9 13:12:05.102387 kubelet[2949]: I0709 13:12:05.102349 2949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247"} err="failed to get container status \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9401cb3b62bca051d83f1c49c387d934f5c2daa2dc0964394132977b9894247\": not found"
Jul 9 13:12:05.102387 kubelet[2949]: I0709 13:12:05.102366 2949 scope.go:117] "RemoveContainer" containerID="a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151"
Jul 9 13:12:05.102879 kubelet[2949]: E0709 13:12:05.102525 2949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\": not found" containerID="a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151"
Jul 9 13:12:05.102879 kubelet[2949]: I0709 13:12:05.102540 2949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151"} err="failed to get container status \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\": rpc error: code = NotFound desc = an error occurred when try to find container \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\": not found"
Jul 9 13:12:05.102879 kubelet[2949]: I0709 13:12:05.102550 2949 scope.go:117] "RemoveContainer" 
containerID="57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d"
Jul 9 13:12:05.102879 kubelet[2949]: E0709 13:12:05.102718 2949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\": not found" containerID="57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d"
Jul 9 13:12:05.102879 kubelet[2949]: I0709 13:12:05.102729 2949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d"} err="failed to get container status \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\": not found"
Jul 9 13:12:05.102879 kubelet[2949]: I0709 13:12:05.102736 2949 scope.go:117] "RemoveContainer" containerID="080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee"
Jul 9 13:12:05.102999 containerd[1659]: time="2025-07-09T13:12:05.102456773Z" level=error msg="ContainerStatus for \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a707a648f5c424c7798c92fba9180e41d8638bd676b1409229fcea87bfa58151\": not found"
Jul 9 13:12:05.102999 containerd[1659]: time="2025-07-09T13:12:05.102657959Z" level=error msg="ContainerStatus for \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57dc5d70f3decf1a6045acf83854cd0994fc890fe0a89be497c4289d013d1a9d\": not found"
Jul 9 13:12:05.102999 containerd[1659]: time="2025-07-09T13:12:05.102820389Z" level=error msg="ContainerStatus for 
\"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\": not found"
Jul 9 13:12:05.103051 kubelet[2949]: E0709 13:12:05.102905 2949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\": not found" containerID="080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee"
Jul 9 13:12:05.103051 kubelet[2949]: I0709 13:12:05.102915 2949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee"} err="failed to get container status \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"080fb7116012c729b92754402c4e9e79a43bec649427f6653c1be8356a12a5ee\": not found"
Jul 9 13:12:05.103051 kubelet[2949]: I0709 13:12:05.102925 2949 scope.go:117] "RemoveContainer" containerID="63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade"
Jul 9 13:12:05.103110 containerd[1659]: time="2025-07-09T13:12:05.103002609Z" level=error msg="ContainerStatus for \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\": not found"
Jul 9 13:12:05.103132 kubelet[2949]: E0709 13:12:05.103052 2949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\": not found" 
containerID="63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade"
Jul 9 13:12:05.103132 kubelet[2949]: I0709 13:12:05.103062 2949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade"} err="failed to get container status \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\": rpc error: code = NotFound desc = an error occurred when try to find container \"63a492b6f3379dff0844e37c78e128e54a52495b751b8c57a696b8ef3b2d8ade\": not found"
Jul 9 13:12:05.620113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff01457b2a3f9e96a80665d56887be58fa3ea27db2006ed7cd025890bef1634e-shm.mount: Deactivated successfully.
Jul 9 13:12:05.620181 systemd[1]: var-lib-kubelet-pods-1bad48ba\x2dad86\x2d4d68\x2da0ab\x2d8d3c80a7260e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzgw7m.mount: Deactivated successfully.
Jul 9 13:12:05.620235 systemd[1]: var-lib-kubelet-pods-d84cc738\x2d1909\x2d460a\x2d9971\x2d4ee9bfc13ad0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5p5n.mount: Deactivated successfully.
Jul 9 13:12:05.620287 systemd[1]: var-lib-kubelet-pods-d84cc738\x2d1909\x2d460a\x2d9971\x2d4ee9bfc13ad0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 9 13:12:05.620324 systemd[1]: var-lib-kubelet-pods-d84cc738\x2d1909\x2d460a\x2d9971\x2d4ee9bfc13ad0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 9 13:12:05.745352 kubelet[2949]: I0709 13:12:05.745272 2949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bad48ba-ad86-4d68-a0ab-8d3c80a7260e" path="/var/lib/kubelet/pods/1bad48ba-ad86-4d68-a0ab-8d3c80a7260e/volumes"
Jul 9 13:12:05.746475 kubelet[2949]: I0709 13:12:05.746172 2949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84cc738-1909-460a-9971-4ee9bfc13ad0" path="/var/lib/kubelet/pods/d84cc738-1909-460a-9971-4ee9bfc13ad0/volumes"
Jul 9 13:12:06.478977 sshd[4534]: Connection closed by 139.178.68.195 port 40742
Jul 9 13:12:06.479685 sshd-session[4531]: pam_unix(sshd:session): session closed for user core
Jul 9 13:12:06.486512 systemd[1]: sshd@24-139.178.70.108:22-139.178.68.195:40742.service: Deactivated successfully.
Jul 9 13:12:06.488029 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 13:12:06.488721 systemd-logind[1628]: Session 27 logged out. Waiting for processes to exit.
Jul 9 13:12:06.490259 systemd-logind[1628]: Removed session 27.
Jul 9 13:12:06.491441 systemd[1]: Started sshd@25-139.178.70.108:22-139.178.68.195:40758.service - OpenSSH per-connection server daemon (139.178.68.195:40758).
Jul 9 13:12:06.578063 sshd[4684]: Accepted publickey for core from 139.178.68.195 port 40758 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:12:06.578859 sshd-session[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:12:06.583027 systemd-logind[1628]: New session 28 of user core.
Jul 9 13:12:06.588740 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 9 13:12:06.789803 containerd[1659]: time="2025-07-09T13:12:06.789708766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" id:\"a93606c0b87d982c3fb16340d522ed075662716ca2bfb2c4bdc837f52af9cf4a\" pid:3134 exit_status:137 exited_at:{seconds:1752066724 nanos:694032179}"
Jul 9 13:12:06.967536 sshd[4687]: Connection closed by 139.178.68.195 port 40758
Jul 9 13:12:06.968461 sshd-session[4684]: pam_unix(sshd:session): session closed for user core
Jul 9 13:12:06.974524 systemd[1]: sshd@25-139.178.70.108:22-139.178.68.195:40758.service: Deactivated successfully.
Jul 9 13:12:06.976037 systemd[1]: session-28.scope: Deactivated successfully.
Jul 9 13:12:06.978042 systemd-logind[1628]: Session 28 logged out. Waiting for processes to exit.
Jul 9 13:12:06.981894 systemd[1]: Started sshd@26-139.178.70.108:22-139.178.68.195:40764.service - OpenSSH per-connection server daemon (139.178.68.195:40764).
Jul 9 13:12:06.985328 kubelet[2949]: I0709 13:12:06.985209 2949 memory_manager.go:355] "RemoveStaleState removing state" podUID="d84cc738-1909-460a-9971-4ee9bfc13ad0" containerName="cilium-agent"
Jul 9 13:12:06.986773 kubelet[2949]: I0709 13:12:06.985819 2949 memory_manager.go:355] "RemoveStaleState removing state" podUID="1bad48ba-ad86-4d68-a0ab-8d3c80a7260e" containerName="cilium-operator"
Jul 9 13:12:06.986497 systemd-logind[1628]: Removed session 28.
Jul 9 13:12:06.998801 systemd[1]: Created slice kubepods-burstable-poda1d10c81_ab44_4584_afcb_c7bcddf6cd14.slice - libcontainer container kubepods-burstable-poda1d10c81_ab44_4584_afcb_c7bcddf6cd14.slice.
Jul 9 13:12:07.037517 sshd[4697]: Accepted publickey for core from 139.178.68.195 port 40764 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:12:07.039093 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:12:07.044217 systemd-logind[1628]: New session 29 of user core.
Jul 9 13:12:07.048734 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 9 13:12:07.097481 kubelet[2949]: I0709 13:12:07.097445 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-host-proc-sys-kernel\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097481 kubelet[2949]: I0709 13:12:07.097470 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-cni-path\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097481 kubelet[2949]: I0709 13:12:07.097482 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-cilium-ipsec-secrets\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097606 kubelet[2949]: I0709 13:12:07.097492 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-bpf-maps\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097606 kubelet[2949]: I0709 13:12:07.097501 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-cilium-cgroup\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097606 kubelet[2949]: I0709 
13:12:07.097509 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-host-proc-sys-net\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097606 kubelet[2949]: I0709 13:12:07.097519 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-cilium-run\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097606 kubelet[2949]: I0709 13:12:07.097528 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-lib-modules\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.097606 kubelet[2949]: I0709 13:12:07.097536 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-hubble-tls\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098573 kubelet[2949]: I0709 13:12:07.097545 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdbx\" (UniqueName: \"kubernetes.io/projected/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-kube-api-access-pcdbx\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098573 kubelet[2949]: I0709 13:12:07.097554 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-hostproc\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098573 kubelet[2949]: I0709 13:12:07.097563 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-etc-cni-netd\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098573 kubelet[2949]: I0709 13:12:07.097572 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-xtables-lock\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098573 kubelet[2949]: I0709 13:12:07.097582 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-clustermesh-secrets\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098573 kubelet[2949]: I0709 13:12:07.097591 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1d10c81-ab44-4584-afcb-c7bcddf6cd14-cilium-config-path\") pod \"cilium-g6rgj\" (UID: \"a1d10c81-ab44-4584-afcb-c7bcddf6cd14\") " pod="kube-system/cilium-g6rgj"
Jul 9 13:12:07.098705 sshd[4700]: Connection closed by 139.178.68.195 port 40764
Jul 9 13:12:07.098927 sshd-session[4697]: pam_unix(sshd:session): session closed for user core
Jul 9 13:12:07.107995 systemd[1]: sshd@26-139.178.70.108:22-139.178.68.195:40764.service: Deactivated successfully.
Jul 9 13:12:07.109210 systemd[1]: session-29.scope: Deactivated successfully.
Jul 9 13:12:07.109796 systemd-logind[1628]: Session 29 logged out. Waiting for processes to exit.
Jul 9 13:12:07.111265 systemd[1]: Started sshd@27-139.178.70.108:22-139.178.68.195:40778.service - OpenSSH per-connection server daemon (139.178.68.195:40778).
Jul 9 13:12:07.112067 systemd-logind[1628]: Removed session 29.
Jul 9 13:12:07.145976 sshd[4707]: Accepted publickey for core from 139.178.68.195 port 40778 ssh2: RSA SHA256:pHehh7tc90QOyf1uGohWVF4tJIie1SMOFA2c8G1DmZI
Jul 9 13:12:07.146885 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 13:12:07.150006 systemd-logind[1628]: New session 30 of user core.
Jul 9 13:12:07.152716 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 9 13:12:07.304132 containerd[1659]: time="2025-07-09T13:12:07.304055172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6rgj,Uid:a1d10c81-ab44-4584-afcb-c7bcddf6cd14,Namespace:kube-system,Attempt:0,}"
Jul 9 13:12:07.315809 containerd[1659]: time="2025-07-09T13:12:07.315758222Z" level=info msg="connecting to shim 55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6" address="unix:///run/containerd/s/370c879a6ce1b0a5ab3864fb1cca38993aca3e27dcb5475016dc5ebb76ffdb49" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:12:07.337949 systemd[1]: Started cri-containerd-55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6.scope - libcontainer container 55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6.
Jul 9 13:12:07.356042 containerd[1659]: time="2025-07-09T13:12:07.355986893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6rgj,Uid:a1d10c81-ab44-4584-afcb-c7bcddf6cd14,Namespace:kube-system,Attempt:0,} returns sandbox id \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\""
Jul 9 13:12:07.358743 containerd[1659]: time="2025-07-09T13:12:07.358663200Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 13:12:07.363170 containerd[1659]: time="2025-07-09T13:12:07.363142450Z" level=info msg="Container e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:07.366412 containerd[1659]: time="2025-07-09T13:12:07.366384232Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\""
Jul 9 13:12:07.366765 containerd[1659]: time="2025-07-09T13:12:07.366743243Z" level=info msg="StartContainer for \"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\""
Jul 9 13:12:07.367589 containerd[1659]: time="2025-07-09T13:12:07.367569030Z" level=info msg="connecting to shim e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6" address="unix:///run/containerd/s/370c879a6ce1b0a5ab3864fb1cca38993aca3e27dcb5475016dc5ebb76ffdb49" protocol=ttrpc version=3
Jul 9 13:12:07.395798 systemd[1]: Started cri-containerd-e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6.scope - libcontainer container e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6.
Jul 9 13:12:07.417005 containerd[1659]: time="2025-07-09T13:12:07.416977239Z" level=info msg="StartContainer for \"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\" returns successfully"
Jul 9 13:12:07.446518 systemd[1]: cri-containerd-e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6.scope: Deactivated successfully.
Jul 9 13:12:07.446898 systemd[1]: cri-containerd-e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6.scope: Consumed 15ms CPU time, 9.5M memory peak, 3.1M read from disk.
Jul 9 13:12:07.447813 containerd[1659]: time="2025-07-09T13:12:07.447791312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\" id:\"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\" pid:4778 exited_at:{seconds:1752066727 nanos:447437376}"
Jul 9 13:12:07.447965 containerd[1659]: time="2025-07-09T13:12:07.447903654Z" level=info msg="received exit event container_id:\"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\" id:\"e912bcd4c0699abfef77edcb140bc4d5d4cb4bd61453ede8ecaee4d503dd9ad6\" pid:4778 exited_at:{seconds:1752066727 nanos:447437376}"
Jul 9 13:12:08.068751 containerd[1659]: time="2025-07-09T13:12:08.068726502Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 13:12:08.089164 containerd[1659]: time="2025-07-09T13:12:08.089139884Z" level=info msg="Container 6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:08.124904 containerd[1659]: time="2025-07-09T13:12:08.124825256Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\""
Jul 9 13:12:08.125607 containerd[1659]: time="2025-07-09T13:12:08.125321315Z" level=info msg="StartContainer for \"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\""
Jul 9 13:12:08.126520 containerd[1659]: time="2025-07-09T13:12:08.126457897Z" level=info msg="connecting to shim 6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063" address="unix:///run/containerd/s/370c879a6ce1b0a5ab3864fb1cca38993aca3e27dcb5475016dc5ebb76ffdb49" protocol=ttrpc version=3
Jul 9 13:12:08.147777 systemd[1]: Started cri-containerd-6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063.scope - libcontainer container 6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063.
Jul 9 13:12:08.180472 containerd[1659]: time="2025-07-09T13:12:08.180437598Z" level=info msg="StartContainer for \"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\" returns successfully"
Jul 9 13:12:08.214725 systemd[1]: cri-containerd-6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063.scope: Deactivated successfully.
Jul 9 13:12:08.215045 systemd[1]: cri-containerd-6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063.scope: Consumed 14ms CPU time, 7.4M memory peak, 2M read from disk.
Jul 9 13:12:08.216173 containerd[1659]: time="2025-07-09T13:12:08.216084949Z" level=info msg="received exit event container_id:\"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\" id:\"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\" pid:4829 exited_at:{seconds:1752066728 nanos:215842747}"
Jul 9 13:12:08.216441 containerd[1659]: time="2025-07-09T13:12:08.216420606Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\" id:\"6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063\" pid:4829 exited_at:{seconds:1752066728 nanos:215842747}"
Jul 9 13:12:08.231540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bfc1af58af97e02a4c783fa4c63664348a724beb986b17399778eb4c8869063-rootfs.mount: Deactivated successfully.
Jul 9 13:12:09.072149 containerd[1659]: time="2025-07-09T13:12:09.072114499Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 13:12:09.106987 containerd[1659]: time="2025-07-09T13:12:09.106884417Z" level=info msg="Container f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:09.132390 containerd[1659]: time="2025-07-09T13:12:09.132359682Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\""
Jul 9 13:12:09.133696 containerd[1659]: time="2025-07-09T13:12:09.133679925Z" level=info msg="StartContainer for \"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\""
Jul 9 13:12:09.134742 containerd[1659]: time="2025-07-09T13:12:09.134697825Z" level=info msg="connecting to shim 
f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de" address="unix:///run/containerd/s/370c879a6ce1b0a5ab3864fb1cca38993aca3e27dcb5475016dc5ebb76ffdb49" protocol=ttrpc version=3
Jul 9 13:12:09.150742 systemd[1]: Started cri-containerd-f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de.scope - libcontainer container f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de.
Jul 9 13:12:09.177244 containerd[1659]: time="2025-07-09T13:12:09.177215223Z" level=info msg="StartContainer for \"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\" returns successfully"
Jul 9 13:12:09.233119 systemd[1]: cri-containerd-f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de.scope: Deactivated successfully.
Jul 9 13:12:09.233468 systemd[1]: cri-containerd-f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de.scope: Consumed 15ms CPU time, 5.8M memory peak, 1.1M read from disk.
Jul 9 13:12:09.234006 containerd[1659]: time="2025-07-09T13:12:09.233979821Z" level=info msg="received exit event container_id:\"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\" id:\"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\" pid:4875 exited_at:{seconds:1752066729 nanos:233836198}"
Jul 9 13:12:09.234210 containerd[1659]: time="2025-07-09T13:12:09.234150276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\" id:\"f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de\" pid:4875 exited_at:{seconds:1752066729 nanos:233836198}"
Jul 9 13:12:09.250388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9d06936662feb2ea1a954b5233ae415b73f03b261f9d66f49ed933a2709f1de-rootfs.mount: Deactivated successfully.
Jul 9 13:12:09.819208 kubelet[2949]: E0709 13:12:09.819181 2949 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 13:12:10.078800 containerd[1659]: time="2025-07-09T13:12:10.078693816Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 13:12:10.116750 containerd[1659]: time="2025-07-09T13:12:10.116719701Z" level=info msg="Container dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:10.142417 containerd[1659]: time="2025-07-09T13:12:10.142384936Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\""
Jul 9 13:12:10.143076 containerd[1659]: time="2025-07-09T13:12:10.142707520Z" level=info msg="StartContainer for \"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\""
Jul 9 13:12:10.144215 containerd[1659]: time="2025-07-09T13:12:10.144036864Z" level=info msg="connecting to shim dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6" address="unix:///run/containerd/s/370c879a6ce1b0a5ab3864fb1cca38993aca3e27dcb5475016dc5ebb76ffdb49" protocol=ttrpc version=3
Jul 9 13:12:10.163724 systemd[1]: Started cri-containerd-dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6.scope - libcontainer container dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6.
Jul 9 13:12:10.179939 systemd[1]: cri-containerd-dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6.scope: Deactivated successfully.
Jul 9 13:12:10.180689 containerd[1659]: time="2025-07-09T13:12:10.180630916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\" id:\"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\" pid:4914 exited_at:{seconds:1752066730 nanos:180156183}"
Jul 9 13:12:10.192716 containerd[1659]: time="2025-07-09T13:12:10.192541393Z" level=info msg="received exit event container_id:\"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\" id:\"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\" pid:4914 exited_at:{seconds:1752066730 nanos:180156183}"
Jul 9 13:12:10.197211 containerd[1659]: time="2025-07-09T13:12:10.197177545Z" level=info msg="StartContainer for \"dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6\" returns successfully"
Jul 9 13:12:10.206090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd056f8bde1a09292fcdb3d750c532ace75d5592e1c9cdd80c833abda744b6b6-rootfs.mount: Deactivated successfully.
Jul 9 13:12:11.081289 containerd[1659]: time="2025-07-09T13:12:11.081183742Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 13:12:11.091122 containerd[1659]: time="2025-07-09T13:12:11.091100857Z" level=info msg="Container 86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:11.094502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687203571.mount: Deactivated successfully.
Jul 9 13:12:11.102215 containerd[1659]: time="2025-07-09T13:12:11.102143137Z" level=info msg="CreateContainer within sandbox \"55e1f28d2e586972513bd5fb4fc26631a8e1fce19860f9dbf6055b1147257ae6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\""
Jul 9 13:12:11.103538 containerd[1659]: time="2025-07-09T13:12:11.102800632Z" level=info msg="StartContainer for \"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\""
Jul 9 13:12:11.103649 containerd[1659]: time="2025-07-09T13:12:11.103622277Z" level=info msg="connecting to shim 86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d" address="unix:///run/containerd/s/370c879a6ce1b0a5ab3864fb1cca38993aca3e27dcb5475016dc5ebb76ffdb49" protocol=ttrpc version=3
Jul 9 13:12:11.121784 systemd[1]: Started cri-containerd-86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d.scope - libcontainer container 86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d.
Jul 9 13:12:11.148144 containerd[1659]: time="2025-07-09T13:12:11.148119415Z" level=info msg="StartContainer for \"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\" returns successfully"
Jul 9 13:12:11.272420 containerd[1659]: time="2025-07-09T13:12:11.272392968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\" id:\"5cfedaaf1991e8ea0b1ebeddf50c32bf5518632e6e5d61c4d53f86df8d23834f\" pid:4981 exited_at:{seconds:1752066731 nanos:272197090}"
Jul 9 13:12:12.310731 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 9 13:12:13.187864 kubelet[2949]: I0709 13:12:13.187660 2949 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T13:12:13Z","lastTransitionTime":"2025-07-09T13:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 13:12:13.640698 containerd[1659]: time="2025-07-09T13:12:13.640671465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\" id:\"9ea91df19aac38ee0e8c6049b8dd259dfdea28c92c0d8f76a0f6189cda4bdbd6\" pid:5101 exit_status:1 exited_at:{seconds:1752066733 nanos:639970717}"
Jul 9 13:12:14.908750 systemd-networkd[1530]: lxc_health: Link UP
Jul 9 13:12:14.928063 systemd-networkd[1530]: lxc_health: Gained carrier
Jul 9 13:12:15.320148 kubelet[2949]: I0709 13:12:15.319826 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g6rgj" podStartSLOduration=9.31980131 podStartE2EDuration="9.31980131s" podCreationTimestamp="2025-07-09 13:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:12.110544785 +0000 UTC m=+132.546947923" watchObservedRunningTime="2025-07-09 13:12:15.31980131 +0000 UTC m=+135.756204452"
Jul 9 13:12:15.783010 containerd[1659]: time="2025-07-09T13:12:15.782982592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\" id:\"79fcc13e5479139b70e1037e24e85c88051e159fe5509f226f7ebcbfc32262c8\" pid:5556 exited_at:{seconds:1752066735 nanos:782602000}"
Jul 9 13:12:16.940758 systemd-networkd[1530]: lxc_health: Gained IPv6LL
Jul 9 13:12:17.895377 containerd[1659]: time="2025-07-09T13:12:17.895330312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\" id:\"64e45e8587ca15593ddb6427a9ff7f9f57abf235d2bb6f2c6c44c64e2ddcd2c6\" pid:5588 exited_at:{seconds:1752066737 nanos:894776140}"
Jul 9 13:12:19.972661 containerd[1659]: time="2025-07-09T13:12:19.972627507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86a13bf92cf23f2cfb2907b027202eb18c0c59cb003a5a05cb68ecc3bc990d3d\" id:\"0958c1956bdf17d3a18f465ccfb1494751bc96f0a1c0952fc0f10028cf9460eb\" pid:5618 exited_at:{seconds:1752066739 nanos:972085678}"
Jul 9 13:12:19.984959 sshd[4710]: Connection closed by 139.178.68.195 port 40778
Jul 9 13:12:19.985680 sshd-session[4707]: pam_unix(sshd:session): session closed for user core
Jul 9 13:12:19.996668 systemd[1]: sshd@27-139.178.70.108:22-139.178.68.195:40778.service: Deactivated successfully.
Jul 9 13:12:19.998497 systemd[1]: session-30.scope: Deactivated successfully.
Jul 9 13:12:19.999952 systemd-logind[1628]: Session 30 logged out. Waiting for processes to exit.
Jul 9 13:12:20.002427 systemd-logind[1628]: Removed session 30.