Sep 13 01:02:55.664486 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 01:02:55.664501 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 01:02:55.664508 kernel: Disabled fast string operations
Sep 13 01:02:55.664512 kernel: BIOS-provided physical RAM map:
Sep 13 01:02:55.664516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 13 01:02:55.664520 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 13 01:02:55.664525 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 13 01:02:55.664530 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 13 01:02:55.664533 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 13 01:02:55.664537 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 13 01:02:55.664541 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 13 01:02:55.664545 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 13 01:02:55.664549 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 13 01:02:55.664553 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 01:02:55.664559 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 13 01:02:55.664564 kernel: NX (Execute Disable) protection: active
Sep 13 01:02:55.664568 kernel: SMBIOS 2.7 present.
Sep 13 01:02:55.664572 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 13 01:02:55.664577 kernel: vmware: hypercall mode: 0x00
Sep 13 01:02:55.664581 kernel: Hypervisor detected: VMware
Sep 13 01:02:55.664586 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 13 01:02:55.664591 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 13 01:02:55.664595 kernel: vmware: using clock offset of 3549686072 ns
Sep 13 01:02:55.664599 kernel: tsc: Detected 3408.000 MHz processor
Sep 13 01:02:55.664604 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 01:02:55.664609 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 01:02:55.664613 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 13 01:02:55.664618 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 01:02:55.664622 kernel: total RAM covered: 3072M
Sep 13 01:02:55.664627 kernel: Found optimal setting for mtrr clean up
Sep 13 01:02:55.664633 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 13 01:02:55.664637 kernel: Using GB pages for direct mapping
Sep 13 01:02:55.664641 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:02:55.664646 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 13 01:02:55.664650 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Sep 13 01:02:55.664655 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Sep 13 01:02:55.664659 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Sep 13 01:02:55.664663 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 13 01:02:55.664668 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 13 01:02:55.664673 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Sep 13 01:02:55.664679 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Sep 13 01:02:55.664684 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Sep 13 01:02:55.664689 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Sep 13 01:02:55.664694 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Sep 13 01:02:55.664699 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Sep 13 01:02:55.664704 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 13 01:02:55.664709 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 13 01:02:55.664732 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 13 01:02:55.664737 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 13 01:02:55.664742 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 13 01:02:55.664747 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 13 01:02:55.664752 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 13 01:02:55.664757 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 13 01:02:55.664762 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 13 01:02:55.664767 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 13 01:02:55.664772 kernel: system APIC only can use physical flat
Sep 13 01:02:55.664777 kernel: Setting APIC routing to physical flat.
Sep 13 01:02:55.664781 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 01:02:55.664803 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 13 01:02:55.664808 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 13 01:02:55.664812 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 13 01:02:55.664817 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 13 01:02:55.664822 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 13 01:02:55.664827 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 13 01:02:55.664831 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 13 01:02:55.664836 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Sep 13 01:02:55.664841 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Sep 13 01:02:55.664845 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Sep 13 01:02:55.664850 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Sep 13 01:02:55.664855 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Sep 13 01:02:55.664859 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Sep 13 01:02:55.664864 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Sep 13 01:02:55.664869 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Sep 13 01:02:55.664874 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Sep 13 01:02:55.664878 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Sep 13 01:02:55.664883 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Sep 13 01:02:55.664888 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Sep 13 01:02:55.664892 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Sep 13 01:02:55.664897 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Sep 13 01:02:55.664902 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Sep 13 01:02:55.664906 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Sep 13 01:02:55.664911 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Sep 13 01:02:55.664916 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Sep 13 01:02:55.664921 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Sep 13 01:02:55.664926 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Sep 13 01:02:55.664931 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Sep 13 01:02:55.664935 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Sep 13 01:02:55.664941 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Sep 13 01:02:55.664949 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Sep 13 01:02:55.664956 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Sep 13 01:02:55.664963 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Sep 13 01:02:55.664971 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Sep 13 01:02:55.664978 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Sep 13 01:02:55.664983 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Sep 13 01:02:55.664987 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Sep 13 01:02:55.664992 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Sep 13 01:02:55.664997 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Sep 13 01:02:55.665001 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Sep 13 01:02:55.665006 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Sep 13 01:02:55.665011 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Sep 13 01:02:55.665015 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Sep 13 01:02:55.665020 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Sep 13 01:02:55.665026 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Sep 13 01:02:55.665030 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Sep 13 01:02:55.665035 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Sep 13 01:02:55.665040 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Sep 13 01:02:55.665044 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Sep 13 01:02:55.665049 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Sep 13 01:02:55.665053 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Sep 13 01:02:55.665058 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Sep 13 01:02:55.665063 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Sep 13 01:02:55.665067 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Sep 13 01:02:55.665073 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Sep 13 01:02:55.665078 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Sep 13 01:02:55.665082 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Sep 13 01:02:55.665087 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Sep 13 01:02:55.665091 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Sep 13 01:02:55.665096 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Sep 13 01:02:55.665105 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Sep 13 01:02:55.665110 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Sep 13 01:02:55.665115 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Sep 13 01:02:55.665120 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Sep 13 01:02:55.665125 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Sep 13 01:02:55.665131 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Sep 13 01:02:55.665136 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Sep 13 01:02:55.665141 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Sep 13 01:02:55.665146 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Sep 13 01:02:55.665151 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Sep 13 01:02:55.665156 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Sep 13 01:02:55.665161 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Sep 13 01:02:55.665167 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Sep 13 01:02:55.665180 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Sep 13 01:02:55.665186 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Sep 13 01:02:55.665191 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Sep 13 01:02:55.665196 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Sep 13 01:02:55.665201 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Sep 13 01:02:55.665206 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Sep 13 01:02:55.665211 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Sep 13 01:02:55.665216 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Sep 13 01:02:55.665240 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Sep 13 01:02:55.665250 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Sep 13 01:02:55.665258 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Sep 13 01:02:55.665280 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Sep 13 01:02:55.665287 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Sep 13 01:02:55.665295 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Sep 13 01:02:55.665301 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Sep 13 01:02:55.665309 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Sep 13 01:02:55.665316 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Sep 13 01:02:55.665323 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Sep 13 01:02:55.665330 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Sep 13 01:02:55.665335 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Sep 13 01:02:55.665340 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Sep 13 01:02:55.665345 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Sep 13 01:02:55.665350 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Sep 13 01:02:55.665355 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Sep 13 01:02:55.665360 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Sep 13 01:02:55.665365 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Sep 13 01:02:55.665370 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Sep 13 01:02:55.665375 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Sep 13 01:02:55.665381 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Sep 13 01:02:55.665386 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Sep 13 01:02:55.665391 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Sep 13 01:02:55.665396 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Sep 13 01:02:55.665401 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Sep 13 01:02:55.665406 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Sep 13 01:02:55.665411 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Sep 13 01:02:55.665416 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Sep 13 01:02:55.665421 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Sep 13 01:02:55.665426 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Sep 13 01:02:55.665432 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Sep 13 01:02:55.665437 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Sep 13 01:02:55.665442 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Sep 13 01:02:55.665447 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Sep 13 01:02:55.665452 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Sep 13 01:02:55.665457 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Sep 13 01:02:55.665462 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Sep 13 01:02:55.665467 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Sep 13 01:02:55.665472 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Sep 13 01:02:55.665477 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Sep 13 01:02:55.665483 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Sep 13 01:02:55.665488 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Sep 13 01:02:55.665493 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Sep 13 01:02:55.665498 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Sep 13 01:02:55.665503 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Sep 13 01:02:55.665508 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Sep 13 01:02:55.665513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 01:02:55.665519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 01:02:55.665524 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 13 01:02:55.665531 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Sep 13 01:02:55.665541 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Sep 13 01:02:55.665549 kernel: Zone ranges:
Sep 13 01:02:55.665556 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 01:02:55.665561 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 13 01:02:55.665566 kernel: Normal empty
Sep 13 01:02:55.665571 kernel: Movable zone start for each node
Sep 13 01:02:55.665578 kernel: Early memory node ranges
Sep 13 01:02:55.665585 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 13 01:02:55.665591 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 13 01:02:55.665598 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 13 01:02:55.665603 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 13 01:02:55.665608 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 01:02:55.665613 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 13 01:02:55.665634 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 13 01:02:55.665639 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 13 01:02:55.665644 kernel: system APIC only can use physical flat
Sep 13 01:02:55.665649 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 13 01:02:55.665654 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 01:02:55.665660 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 01:02:55.665665 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 01:02:55.665670 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 01:02:55.665675 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 01:02:55.665680 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 01:02:55.665685 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 01:02:55.665690 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 01:02:55.665697 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 01:02:55.665705 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 01:02:55.665713 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 01:02:55.665722 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 01:02:55.665727 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 01:02:55.665732 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 01:02:55.665737 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 01:02:55.665742 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 01:02:55.665747 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 13 01:02:55.665752 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 13 01:02:55.665756 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 13 01:02:55.665762 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 13 01:02:55.665768 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 13 01:02:55.665773 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 13 01:02:55.665778 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 13 01:02:55.665783 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 13 01:02:55.665788 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 13 01:02:55.665793 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 13 01:02:55.665798 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 13 01:02:55.665803 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 13 01:02:55.665807 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 13 01:02:55.665831 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 13 01:02:55.665837 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 13 01:02:55.665842 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 13 01:02:55.665847 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 13 01:02:55.665852 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 13 01:02:55.665857 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 13 01:02:55.665862 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 13 01:02:55.665867 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 13 01:02:55.665872 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 13 01:02:55.665877 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 13 01:02:55.665883 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 13 01:02:55.665888 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 13 01:02:55.665893 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 13 01:02:55.665898 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 13 01:02:55.665903 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 13 01:02:55.665908 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 13 01:02:55.665913 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 13 01:02:55.665918 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 13 01:02:55.665923 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 13 01:02:55.665929 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 13 01:02:55.665935 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 13 01:02:55.665940 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 13 01:02:55.665945 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 13 01:02:55.665950 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 13 01:02:55.665955 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 13 01:02:55.665960 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 13 01:02:55.665965 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 13 01:02:55.665970 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 13 01:02:55.665975 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 13 01:02:55.665981 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 13 01:02:55.665986 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 13 01:02:55.665991 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 13 01:02:55.665996 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 13 01:02:55.666001 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 13 01:02:55.666006 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 13 01:02:55.666011 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 13 01:02:55.666016 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 13 01:02:55.666021 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 13 01:02:55.666027 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 13 01:02:55.666032 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 13 01:02:55.666038 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 13 01:02:55.666043 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 13 01:02:55.666048 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 13 01:02:55.666053 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 13 01:02:55.666058 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 13 01:02:55.666063 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 13 01:02:55.666068 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 13 01:02:55.666073 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 13 01:02:55.666079 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 13 01:02:55.666084 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 13 01:02:55.666089 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 13 01:02:55.666094 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 13 01:02:55.666099 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 13 01:02:55.666104 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 13 01:02:55.666109 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 13 01:02:55.666114 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 13 01:02:55.666119 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 13 01:02:55.666125 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 13 01:02:55.666130 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 13 01:02:55.666136 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 13 01:02:55.666141 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 13 01:02:55.666146 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 13 01:02:55.666151 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 13 01:02:55.666156 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 13 01:02:55.666161 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 13 01:02:55.666166 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 13 01:02:55.666180 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 13 01:02:55.666186 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 13 01:02:55.666192 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 13 01:02:55.666196 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 13 01:02:55.666202 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 13 01:02:55.666207 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 13 01:02:55.666212 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 13 01:02:55.666222 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 13 01:02:55.666227 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 13 01:02:55.666233 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 13 01:02:55.666239 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 13 01:02:55.666244 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 13 01:02:55.666249 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 13 01:02:55.666254 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 13 01:02:55.666259 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 13 01:02:55.666265 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 13 01:02:55.666270 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 13 01:02:55.666275 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 13 01:02:55.666280 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 13 01:02:55.666285 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 13 01:02:55.666291 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 13 01:02:55.666296 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 13 01:02:55.666301 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 13 01:02:55.666306 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 13 01:02:55.666311 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 13 01:02:55.666317 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 13 01:02:55.666322 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 13 01:02:55.666327 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 13 01:02:55.666332 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 13 01:02:55.666338 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 13 01:02:55.666343 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 13 01:02:55.666348 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 13 01:02:55.666353 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 13 01:02:55.666361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 13 01:02:55.666369 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 01:02:55.666377 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 13 01:02:55.666385 kernel: TSC deadline timer available
Sep 13 01:02:55.666393 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Sep 13 01:02:55.666400 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 13 01:02:55.666411 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 13 01:02:55.666418 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 01:02:55.666424 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Sep 13 01:02:55.666429 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 13 01:02:55.666435 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 13 01:02:55.666440 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 13 01:02:55.666445 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 13 01:02:55.666450 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 13 01:02:55.666456 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 13 01:02:55.666461 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 13 01:02:55.666466 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 13 01:02:55.666471 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 13 01:02:55.666483 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 13 01:02:55.666489 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 13 01:02:55.666495 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 13 01:02:55.666500 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 13 01:02:55.666505 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 13 01:02:55.666511 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 13 01:02:55.666517 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 13 01:02:55.666522 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 13 01:02:55.666527 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 13 01:02:55.666533 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Sep 13 01:02:55.666538 kernel: Policy zone: DMA32
Sep 13 01:02:55.666544 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 01:02:55.666550 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:02:55.666556 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 13 01:02:55.666562 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 13 01:02:55.666567 kernel: printk: log_buf_len min size: 262144 bytes
Sep 13 01:02:55.666573 kernel: printk: log_buf_len: 1048576 bytes
Sep 13 01:02:55.666578 kernel: printk: early log buf free: 239728(91%)
Sep 13 01:02:55.666584 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:02:55.666590 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 01:02:55.666596 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:02:55.666601 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 155976K reserved, 0K cma-reserved)
Sep 13 01:02:55.666608 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 13 01:02:55.666614 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 01:02:55.666619 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 01:02:55.666626 kernel: rcu: Hierarchical RCU implementation.
Sep 13 01:02:55.666632 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:02:55.666638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 13 01:02:55.666644 kernel: Rude variant of Tasks RCU enabled.
Sep 13 01:02:55.666649 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:02:55.666655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:02:55.666660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 13 01:02:55.666666 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 13 01:02:55.666671 kernel: random: crng init done
Sep 13 01:02:55.666677 kernel: Console: colour VGA+ 80x25
Sep 13 01:02:55.666682 kernel: printk: console [tty0] enabled
Sep 13 01:02:55.666688 kernel: printk: console [ttyS0] enabled
Sep 13 01:02:55.666694 kernel: ACPI: Core revision 20210730
Sep 13 01:02:55.666700 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 13 01:02:55.666705 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 01:02:55.666711 kernel: x2apic enabled
Sep 13 01:02:55.666716 kernel: Switched APIC routing to physical x2apic.
Sep 13 01:02:55.666722 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 01:02:55.666728 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 13 01:02:55.666733 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 13 01:02:55.666738 kernel: Disabled fast string operations
Sep 13 01:02:55.666745 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 01:02:55.666750 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 01:02:55.666756 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 01:02:55.666762 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:02:55.666767 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 01:02:55.666773 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 01:02:55.666778 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 01:02:55.666784 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 01:02:55.666790 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 01:02:55.666796 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 01:02:55.666801 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 01:02:55.666807 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 01:02:55.666812 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 13 01:02:55.666818 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 01:02:55.666823 kernel: active return thunk: its_return_thunk
Sep 13 01:02:55.666829 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 01:02:55.666834 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 01:02:55.666841 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 01:02:55.666846 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 01:02:55.666852 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 01:02:55.666857 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 01:02:55.666863 kernel: Freeing SMP alternatives memory: 32K
Sep 13 01:02:55.666868 kernel: pid_max: default: 131072 minimum: 1024
Sep 13 01:02:55.666874 kernel: LSM: Security Framework initializing
Sep 13 01:02:55.666879 kernel: SELinux: Initializing.
Sep 13 01:02:55.666885 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 01:02:55.666891 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 01:02:55.666897 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 01:02:55.666902 kernel: Performance Events: Skylake events, core PMU driver.
Sep 13 01:02:55.666908 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 13 01:02:55.666914 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 13 01:02:55.666919 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 13 01:02:55.666924 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 13 01:02:55.666930 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 13 01:02:55.666935 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 13 01:02:55.666942 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 13 01:02:55.666947 kernel: ... version: 1
Sep 13 01:02:55.666953 kernel: ... bit width: 48
Sep 13 01:02:55.666958 kernel: ... generic registers: 4
Sep 13 01:02:55.666964 kernel: ... value mask: 0000ffffffffffff
Sep 13 01:02:55.666970 kernel: ... max period: 000000007fffffff
Sep 13 01:02:55.666976 kernel: ... fixed-purpose events: 0
Sep 13 01:02:55.666981 kernel: ... event mask: 000000000000000f
Sep 13 01:02:55.666987 kernel: signal: max sigframe size: 1776
Sep 13 01:02:55.666993 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:02:55.666999 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 01:02:55.667004 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:02:55.667012 kernel: x86: Booting SMP configuration:
Sep 13 01:02:55.667021 kernel: ....
node #0, CPUs: #1 Sep 13 01:02:55.667030 kernel: Disabled fast string operations Sep 13 01:02:55.667038 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Sep 13 01:02:55.667044 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 13 01:02:55.667049 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 01:02:55.667055 kernel: smpboot: Max logical packages: 128 Sep 13 01:02:55.667062 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 13 01:02:55.667067 kernel: devtmpfs: initialized Sep 13 01:02:55.667072 kernel: x86/mm: Memory block size: 128MB Sep 13 01:02:55.667078 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 13 01:02:55.667084 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 01:02:55.667090 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 13 01:02:55.667095 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 01:02:55.667101 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 01:02:55.667106 kernel: audit: initializing netlink subsys (disabled) Sep 13 01:02:55.667113 kernel: audit: type=2000 audit(1757725374.089:1): state=initialized audit_enabled=0 res=1 Sep 13 01:02:55.667118 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 01:02:55.667124 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 01:02:55.667129 kernel: cpuidle: using governor menu Sep 13 01:02:55.667134 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 13 01:02:55.667140 kernel: ACPI: bus type PCI registered Sep 13 01:02:55.667145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 01:02:55.667151 kernel: dca service started, version 1.12.1 Sep 13 01:02:55.667156 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Sep 13 01:02:55.667163 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Sep 13 01:02:55.667176 kernel: PCI: Using configuration type 1 for base access Sep 13 01:02:55.667182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 01:02:55.667191 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 01:02:55.667196 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 01:02:55.667202 kernel: ACPI: Added _OSI(Module Device) Sep 13 01:02:55.667207 kernel: ACPI: Added _OSI(Processor Device) Sep 13 01:02:55.667213 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 01:02:55.667234 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 01:02:55.667258 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 01:02:55.667263 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 01:02:55.667269 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 01:02:55.667274 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 13 01:02:55.667280 kernel: ACPI: Interpreter enabled Sep 13 01:02:55.667285 kernel: ACPI: PM: (supports S0 S1 S5) Sep 13 01:02:55.667291 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 01:02:55.667296 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 01:02:55.667302 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 13 01:02:55.667308 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 13 01:02:55.667399 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 01:02:55.667450 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 13 01:02:55.667497 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 13 01:02:55.667504 kernel: PCI host bridge to bus 0000:00 Sep 13 01:02:55.667554 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 01:02:55.667601 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Sep 13 01:02:55.667642 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 01:02:55.667684 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 01:02:55.667728 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 13 01:02:55.667769 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 13 01:02:55.667823 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Sep 13 01:02:55.667876 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Sep 13 01:02:55.667931 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Sep 13 01:02:55.667984 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Sep 13 01:02:55.668041 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Sep 13 01:02:55.668095 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 13 01:02:55.676775 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 13 01:02:55.676878 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 13 01:02:55.676936 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 13 01:02:55.676992 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Sep 13 01:02:55.677043 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 13 01:02:55.677092 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 13 01:02:55.677145 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Sep 13 01:02:55.677206 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Sep 13 01:02:55.677259 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Sep 13 01:02:55.677312 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Sep 13 01:02:55.677362 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Sep 13 01:02:55.677410 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Sep 13 01:02:55.677459 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Sep 13 01:02:55.677508 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Sep 13 01:02:55.677556 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 01:02:55.677609 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Sep 13 01:02:55.677665 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.677714 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.677767 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.677818 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.677871 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.677921 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.678004 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.678068 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.678121 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.679949 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680018 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680073 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680133 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680192 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680247 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680296 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680349 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680399 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680455 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680526 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Sep 13 01:02:55.680581 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680631 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680684 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680737 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680789 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680838 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.680893 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.680953 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.681010 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.681060 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.681115 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.681165 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.683352 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.683419 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.683486 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.683537 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.683595 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.683646 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.683699 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.683749 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.683802 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.683852 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.683907 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.683958 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.684013 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.684063 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.691476 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.691550 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.691612 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.691664 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.691730 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.691784 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.691838 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.691888 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.691942 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.691995 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.692048 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.692098 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.692176 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.692239 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.692294 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.692346 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.692400 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:02:55.692450 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.692532 kernel: pci_bus 0000:01: extended config space not accessible Sep 13 01:02:55.692589 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 01:02:55.692647 kernel: pci_bus 0000:02: extended config space not accessible Sep 
13 01:02:55.692659 kernel: acpiphp: Slot [32] registered Sep 13 01:02:55.692665 kernel: acpiphp: Slot [33] registered Sep 13 01:02:55.692671 kernel: acpiphp: Slot [34] registered Sep 13 01:02:55.692677 kernel: acpiphp: Slot [35] registered Sep 13 01:02:55.692682 kernel: acpiphp: Slot [36] registered Sep 13 01:02:55.692688 kernel: acpiphp: Slot [37] registered Sep 13 01:02:55.692694 kernel: acpiphp: Slot [38] registered Sep 13 01:02:55.692699 kernel: acpiphp: Slot [39] registered Sep 13 01:02:55.692705 kernel: acpiphp: Slot [40] registered Sep 13 01:02:55.692712 kernel: acpiphp: Slot [41] registered Sep 13 01:02:55.692717 kernel: acpiphp: Slot [42] registered Sep 13 01:02:55.692723 kernel: acpiphp: Slot [43] registered Sep 13 01:02:55.692729 kernel: acpiphp: Slot [44] registered Sep 13 01:02:55.692734 kernel: acpiphp: Slot [45] registered Sep 13 01:02:55.692740 kernel: acpiphp: Slot [46] registered Sep 13 01:02:55.692746 kernel: acpiphp: Slot [47] registered Sep 13 01:02:55.692752 kernel: acpiphp: Slot [48] registered Sep 13 01:02:55.692757 kernel: acpiphp: Slot [49] registered Sep 13 01:02:55.692763 kernel: acpiphp: Slot [50] registered Sep 13 01:02:55.692770 kernel: acpiphp: Slot [51] registered Sep 13 01:02:55.692775 kernel: acpiphp: Slot [52] registered Sep 13 01:02:55.692781 kernel: acpiphp: Slot [53] registered Sep 13 01:02:55.692787 kernel: acpiphp: Slot [54] registered Sep 13 01:02:55.692792 kernel: acpiphp: Slot [55] registered Sep 13 01:02:55.692798 kernel: acpiphp: Slot [56] registered Sep 13 01:02:55.692803 kernel: acpiphp: Slot [57] registered Sep 13 01:02:55.692809 kernel: acpiphp: Slot [58] registered Sep 13 01:02:55.692814 kernel: acpiphp: Slot [59] registered Sep 13 01:02:55.692821 kernel: acpiphp: Slot [60] registered Sep 13 01:02:55.692827 kernel: acpiphp: Slot [61] registered Sep 13 01:02:55.692832 kernel: acpiphp: Slot [62] registered Sep 13 01:02:55.692838 kernel: acpiphp: Slot [63] registered Sep 13 01:02:55.692890 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Sep 13 01:02:55.692941 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 01:02:55.692989 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 01:02:55.693038 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 01:02:55.693085 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 13 01:02:55.693137 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 13 01:02:55.693193 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 13 01:02:55.693256 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 13 01:02:55.693306 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 13 01:02:55.693362 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 13 01:02:55.693463 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 13 01:02:55.693516 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 13 01:02:55.693570 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 01:02:55.693622 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 13 01:02:55.693672 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 01:02:55.693724 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 01:02:55.693776 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 01:02:55.693830 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 01:02:55.693881 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 01:02:55.693933 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 13 01:02:55.693982 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 01:02:55.694030 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 01:02:55.694081 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 01:02:55.694130 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 01:02:55.694190 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 01:02:55.694243 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 01:02:55.694299 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 01:02:55.694359 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 01:02:55.694409 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 01:02:55.694459 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 01:02:55.694525 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 01:02:55.694576 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 01:02:55.694629 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 01:02:55.694679 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 01:02:55.694728 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 01:02:55.694779 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 01:02:55.694827 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 01:02:55.694875 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 01:02:55.694925 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 01:02:55.694979 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 01:02:55.695028 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 01:02:55.695084 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 13 01:02:55.695136 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 13 01:02:55.695195 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 13 01:02:55.695248 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 13 01:02:55.695299 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 13 01:02:55.695350 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 01:02:55.695404 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 13 01:02:55.695462 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 01:02:55.695529 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 01:02:55.695581 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 01:02:55.695630 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 01:02:55.695679 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 01:02:55.695734 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 01:02:55.695795 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 01:02:55.695844 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 01:02:55.695893 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 01:02:55.695945 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 01:02:55.695993 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 01:02:55.696061 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 01:02:55.696132 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 01:02:55.697900 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 01:02:55.697970 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 01:02:55.698025 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 01:02:55.698394 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 01:02:55.698454 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 01:02:55.698513 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 01:02:55.698567 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 01:02:55.698618 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 01:02:55.698667 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 01:02:55.698722 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 01:02:55.698771 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 01:02:55.698820 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 01:02:55.698871 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 01:02:55.698922 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 01:02:55.698971 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 01:02:55.699022 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 01:02:55.699071 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 01:02:55.699123 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 01:02:55.699328 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 01:02:55.699385 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 01:02:55.699435 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 01:02:55.699484 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 01:02:55.699538 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 01:02:55.699593 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 01:02:55.699653 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 01:02:55.699704 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 01:02:55.699755 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 01:02:55.699807 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 01:02:55.699876 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 01:02:55.699935 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 01:02:55.699998 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 01:02:55.700054 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 01:02:55.700107 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 01:02:55.700158 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 01:02:55.700217 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 01:02:55.700266 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 01:02:55.700317 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 01:02:55.700367 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 01:02:55.700428 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 01:02:55.700482 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 01:02:55.700535 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 01:02:55.700585 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 01:02:55.700636 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 01:02:55.700685 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 01:02:55.700736 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 01:02:55.700785 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 01:02:55.700837 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 01:02:55.700886 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 01:02:55.700937 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 01:02:55.700986 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 01:02:55.701037 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 01:02:55.701085 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 01:02:55.701134 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 01:02:55.701191 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 01:02:55.701250 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 01:02:55.701310 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 01:02:55.701364 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 
01:02:55.701414 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 01:02:55.701463 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 01:02:55.701514 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 01:02:55.701563 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 01:02:55.701611 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 01:02:55.701662 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 01:02:55.701712 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 01:02:55.701764 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 01:02:55.701817 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 01:02:55.701866 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 01:02:55.701916 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 01:02:55.701926 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 13 01:02:55.701932 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 13 01:02:55.701938 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 13 01:02:55.701944 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 01:02:55.701952 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 13 01:02:55.701958 kernel: iommu: Default domain type: Translated Sep 13 01:02:55.701964 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 01:02:55.702020 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 13 01:02:55.702069 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 01:02:55.702118 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 13 01:02:55.702127 kernel: vgaarb: loaded Sep 13 01:02:55.702133 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 01:02:55.702139 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 01:02:55.702146 kernel: PTP clock support registered Sep 13 01:02:55.702152 kernel: PCI: Using ACPI for IRQ routing Sep 13 01:02:55.702158 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 01:02:55.702163 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 13 01:02:55.702176 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 13 01:02:55.702183 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 13 01:02:55.702189 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 13 01:02:55.702195 kernel: clocksource: Switched to clocksource tsc-early Sep 13 01:02:55.702201 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 01:02:55.702209 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 01:02:55.702220 kernel: pnp: PnP ACPI init Sep 13 01:02:55.702284 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 13 01:02:55.702333 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 13 01:02:55.702381 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 13 01:02:55.702430 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 13 01:02:55.702478 kernel: pnp 00:06: [dma 2] Sep 13 01:02:55.702532 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 13 01:02:55.702578 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 13 01:02:55.702624 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 13 01:02:55.702632 kernel: pnp: PnP ACPI: found 8 devices Sep 13 01:02:55.702638 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 01:02:55.702644 kernel: NET: Registered PF_INET protocol family Sep 13 01:02:55.702650 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 01:02:55.702656 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Sep 13 01:02:55.702667 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 01:02:55.702675 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 01:02:55.702684 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 01:02:55.702693 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 01:02:55.702699 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 01:02:55.702706 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 01:02:55.702712 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 01:02:55.702718 kernel: NET: Registered PF_XDP protocol family Sep 13 01:02:55.702777 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 13 01:02:55.702831 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 01:02:55.702883 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 01:02:55.702935 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 01:02:55.702987 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 01:02:55.703039 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 13 01:02:55.703093 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 13 01:02:55.703144 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 13 01:02:55.703223 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 13 01:02:55.703288 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 13 01:02:55.703340 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Sep 13 01:02:55.703392 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 13 01:02:55.703456 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 13 01:02:55.703515 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 13 01:02:55.703566 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 13 01:02:55.703617 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 13 01:02:55.703668 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 13 01:02:55.703719 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 13 01:02:55.703785 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 13 01:02:55.703837 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 13 01:02:55.703889 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 13 01:02:55.703941 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 13 01:02:55.703991 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 13 01:02:55.704044 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 01:02:55.704461 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 01:02:55.704523 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.704576 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.704628 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.705006 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.705063 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.705116 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.706640 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.706939 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707005 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707058 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707110 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707207 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707511 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707572 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707628 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707688 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707741 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707790 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707839 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707887 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.707948 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.707999 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.708049 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.708101 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.708496 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.708570 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.708630 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.708985 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Sep 13 01:02:55.709053 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709105 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709158 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709224 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709283 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709336 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709387 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709437 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709487 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709540 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709596 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709656 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709718 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709769 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709817 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709870 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.709924 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.709974 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710023 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710071 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710123 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710181 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710236 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710287 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710348 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710399 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710449 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710509 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710562 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710619 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710683 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710734 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.710942 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.710997 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.711052 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.711103 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.711202 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.711269 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.711479 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.711537 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.711589 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.711639 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712011 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.712069 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712128 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Sep 13 01:02:55.712195 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712250 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.712299 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712353 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.712402 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712460 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.712524 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712582 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.712639 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712691 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 01:02:55.712741 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:02:55.712793 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 01:02:55.712843 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 13 01:02:55.712908 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 01:02:55.712958 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 01:02:55.713008 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 01:02:55.713064 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 13 01:02:55.713116 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 01:02:55.713256 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 01:02:55.713316 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 01:02:55.713367 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 01:02:55.713422 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 01:02:55.713470 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Sep 13 01:02:55.713533 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 01:02:55.713583 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 01:02:55.713633 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 01:02:55.713682 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 01:02:55.713730 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 01:02:55.713779 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 01:02:55.713827 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 01:02:55.713880 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 01:02:55.713931 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 01:02:55.713984 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 01:02:55.714039 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 01:02:55.714088 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 01:02:55.714140 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 01:02:55.714204 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 01:02:55.714257 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 01:02:55.714321 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 01:02:55.714371 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 01:02:55.714420 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 01:02:55.714471 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 01:02:55.714521 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 01:02:55.714585 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 01:02:55.714646 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 13 
01:02:55.714712 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 01:02:55.714766 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 01:02:55.714817 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 01:02:55.714866 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 01:02:55.714981 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 01:02:55.715062 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 01:02:55.715116 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 01:02:55.715167 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 01:02:55.715227 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 01:02:55.715297 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 01:02:55.715357 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 01:02:55.715422 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 01:02:55.715475 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 01:02:55.715525 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 01:02:55.715574 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 01:02:55.715630 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 01:02:55.715679 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 01:02:55.715728 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 01:02:55.715778 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 01:02:55.715830 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 01:02:55.715896 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 01:02:55.715968 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 01:02:55.716046 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 
01:02:55.716107 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 01:02:55.716159 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 01:02:55.716575 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 01:02:55.716636 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 01:02:55.716720 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 01:02:55.717295 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 01:02:55.717365 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 01:02:55.717419 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 01:02:55.717477 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 01:02:55.717926 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 01:02:55.718010 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 01:02:55.718411 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 01:02:55.718482 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 01:02:55.718824 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 01:02:55.718883 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 01:02:55.718939 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 01:02:55.718991 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 01:02:55.719042 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 01:02:55.719091 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 01:02:55.719143 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 01:02:55.719230 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 01:02:55.719292 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 01:02:55.719344 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Sep 13 01:02:55.719393 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 01:02:55.719443 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 01:02:55.719494 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 01:02:55.719542 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 01:02:55.719591 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 01:02:55.719640 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 01:02:55.719690 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 01:02:55.719753 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 01:02:55.719820 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 01:02:55.719871 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 01:02:55.719921 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 01:02:55.719977 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 01:02:55.720035 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 01:02:55.720085 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 01:02:55.720134 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 01:02:55.720202 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 01:02:55.720261 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 01:02:55.720310 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 01:02:55.720358 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 01:02:55.720409 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 01:02:55.720462 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 01:02:55.720511 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 01:02:55.720573 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Sep 13 01:02:55.720629 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 01:02:55.720686 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 01:02:55.720748 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 01:02:55.720807 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 01:02:55.720870 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 01:02:55.720928 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 01:02:55.720978 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 01:02:55.721041 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 01:02:55.721107 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 01:02:55.721158 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 01:02:55.721242 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 01:02:55.721296 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 01:02:55.721341 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 01:02:55.721385 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 13 01:02:55.721428 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 13 01:02:55.721482 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 13 01:02:55.721873 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 13 01:02:55.721927 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 13 01:02:55.721976 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 01:02:55.722023 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 01:02:55.722408 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 01:02:55.722467 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Sep 13 01:02:55.722515 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 13 01:02:55.722564 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 13 01:02:55.722904 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 13 01:02:55.722964 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 13 01:02:55.723019 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 01:02:55.723086 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 13 01:02:55.723389 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 13 01:02:55.723441 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 01:02:55.723509 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 13 01:02:55.723874 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 13 01:02:55.723927 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 01:02:55.723980 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 13 01:02:55.724028 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 01:02:55.724079 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 13 01:02:55.724128 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 01:02:55.724266 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 13 01:02:55.724323 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 01:02:55.724407 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 13 01:02:55.724457 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 01:02:55.724607 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 13 01:02:55.724658 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 01:02:55.724712 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 13 01:02:55.724758 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 13 01:02:55.725126 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 01:02:55.725209 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 13 01:02:55.725274 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 13 01:02:55.725332 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 01:02:55.725385 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 13 01:02:55.725432 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 13 01:02:55.725479 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 01:02:55.725530 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 13 01:02:55.725577 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 01:02:55.725628 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 13 01:02:55.725677 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 01:02:55.725728 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 13 01:02:55.725774 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 01:02:55.725830 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 13 01:02:55.725886 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 01:02:55.725958 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 13 01:02:55.726010 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 01:02:55.726059 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 13 01:02:55.726106 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 13 01:02:55.726152 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 01:02:55.726259 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 13 01:02:55.726320 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 13 01:02:55.726383 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 01:02:55.726447 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 13 01:02:55.726502 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 13 01:02:55.726559 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 01:02:55.726638 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 13 01:02:55.726689 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 01:02:55.726739 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 13 01:02:55.726789 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 01:02:55.726841 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 13 01:02:55.726887 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 01:02:55.726939 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 13 01:02:55.726994 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 01:02:55.727046 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 13 01:02:55.727094 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 01:02:55.727145 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 13 01:02:55.727199 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 13 01:02:55.727246 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 01:02:55.727296 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 13 01:02:55.727342 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 13 01:02:55.727388 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 01:02:55.727445 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 13 01:02:55.727492 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 01:02:55.727546 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 13 01:02:55.727595 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 01:02:55.727647 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 13 01:02:55.727696 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 01:02:55.727748 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 13 01:02:55.727796 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 01:02:55.727846 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 13 01:02:55.727893 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 01:02:55.727942 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 13 01:02:55.727993 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 01:02:55.728052 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 01:02:55.728062 kernel: PCI: CLS 32 bytes, default 64 Sep 13 01:02:55.728069 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 01:02:55.728075 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 13 01:02:55.728082 kernel: clocksource: Switched to clocksource tsc Sep 13 01:02:55.728088 kernel: Initialise system trusted keyrings Sep 13 01:02:55.728094 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 01:02:55.728100 kernel: Key type asymmetric registered Sep 13 01:02:55.728108 kernel: Asymmetric key parser 'x509' registered Sep 13 01:02:55.728114 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 01:02:55.728121 kernel: io scheduler mq-deadline registered Sep 13 01:02:55.728127 kernel: io scheduler kyber registered Sep 13 01:02:55.728133 kernel: io scheduler bfq 
registered Sep 13 01:02:55.728193 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 13 01:02:55.728246 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728297 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 13 01:02:55.728348 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728401 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 13 01:02:55.728450 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728501 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 13 01:02:55.728552 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728608 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 13 01:02:55.728676 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728733 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 13 01:02:55.728783 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728832 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 13 01:02:55.728883 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.728933 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 13 01:02:55.728989 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729041 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 13 01:02:55.729090 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729140 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 13 01:02:55.729202 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729258 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 13 01:02:55.729309 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729361 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 13 01:02:55.729412 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729462 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 13 01:02:55.729512 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729562 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 13 01:02:55.729614 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729664 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 13 01:02:55.729719 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729769 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 13 01:02:55.729820 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729873 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 13 01:02:55.729923 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.729980 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 13 01:02:55.730059 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.730142 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 13 01:02:55.730352 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.730410 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 13 01:02:55.730462 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.730516 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 13 01:02:55.730878 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.730937 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 13 01:02:55.730990 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.731226 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 13 01:02:55.731284 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.731708 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 13 01:02:55.731776 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.731830 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 13 01:02:55.731881 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.732151 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 13 01:02:55.732234 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.732289 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 13 01:02:55.732666 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.732725 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 13 01:02:55.732778 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.732835 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 13 01:02:55.732913 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.732968 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 13 01:02:55.733077 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.733144 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 13 01:02:55.733345 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.733403 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 13 01:02:55.733454 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:02:55.733463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 01:02:55.733470 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 01:02:55.733476 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 01:02:55.733483 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 13 01:02:55.733489 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 01:02:55.733497 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 01:02:55.733548 kernel: rtc_cmos 00:01: registered as rtc0 Sep 13 01:02:55.733594 kernel: rtc_cmos 00:01: setting system clock to 2025-09-13T01:02:55 UTC (1757725375) Sep 13 01:02:55.733638 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 13 01:02:55.733647 kernel: intel_pstate: CPU model not supported Sep 13 01:02:55.733653 kernel: NET: Registered PF_INET6 protocol family Sep 13 01:02:55.733659 kernel: Segment Routing with IPv6 Sep 13 01:02:55.733665 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 01:02:55.733673 kernel: NET: Registered PF_PACKET protocol family Sep 13 01:02:55.733679 kernel: Key type dns_resolver registered Sep 13 01:02:55.733685 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 01:02:55.733692 kernel: IPI shorthand broadcast: enabled Sep 13 01:02:55.733698 kernel: sched_clock: Marking stable (880492529, 229015255)->(1182874489, -73366705) Sep 13 01:02:55.733704 kernel: registered taskstats version 1 Sep 13 01:02:55.733710 kernel: Loading compiled-in X.509 certificates Sep 13 01:02:55.733716 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 01:02:55.733722 kernel: Key type .fscrypt registered Sep 13 01:02:55.733729 kernel: Key type fscrypt-provisioning 
registered Sep 13 01:02:55.733735 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 01:02:55.733741 kernel: ima: Allocated hash algorithm: sha1 Sep 13 01:02:55.733747 kernel: ima: No architecture policies found Sep 13 01:02:55.733753 kernel: clk: Disabling unused clocks Sep 13 01:02:55.733760 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 01:02:55.733766 kernel: Write protecting the kernel read-only data: 28672k Sep 13 01:02:55.733772 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 01:02:55.733784 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 01:02:55.733795 kernel: Run /init as init process Sep 13 01:02:55.733802 kernel: with arguments: Sep 13 01:02:55.733809 kernel: /init Sep 13 01:02:55.733815 kernel: with environment: Sep 13 01:02:55.733821 kernel: HOME=/ Sep 13 01:02:55.733827 kernel: TERM=linux Sep 13 01:02:55.733833 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 01:02:55.733842 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:02:55.733852 systemd[1]: Detected virtualization vmware. Sep 13 01:02:55.733858 systemd[1]: Detected architecture x86-64. Sep 13 01:02:55.733865 systemd[1]: Running in initrd. Sep 13 01:02:55.733871 systemd[1]: No hostname configured, using default hostname. Sep 13 01:02:55.733877 systemd[1]: Hostname set to <localhost>. Sep 13 01:02:55.733883 systemd[1]: Initializing machine ID from random generator. Sep 13 01:02:55.733890 systemd[1]: Queued start job for default target initrd.target. Sep 13 01:02:55.733896 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:02:55.733903 systemd[1]: Reached target cryptsetup.target. 
Sep 13 01:02:55.733909 systemd[1]: Reached target paths.target. Sep 13 01:02:55.733915 systemd[1]: Reached target slices.target. Sep 13 01:02:55.733922 systemd[1]: Reached target swap.target. Sep 13 01:02:55.733928 systemd[1]: Reached target timers.target. Sep 13 01:02:55.733935 systemd[1]: Listening on iscsid.socket. Sep 13 01:02:55.733941 systemd[1]: Listening on iscsiuio.socket. Sep 13 01:02:55.733949 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 01:02:55.733955 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 01:02:55.733962 systemd[1]: Listening on systemd-journald.socket. Sep 13 01:02:55.733968 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:02:55.733974 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 01:02:55.733981 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 01:02:55.733987 systemd[1]: Reached target sockets.target. Sep 13 01:02:55.733993 systemd[1]: Starting kmod-static-nodes.service... Sep 13 01:02:55.733999 systemd[1]: Finished network-cleanup.service. Sep 13 01:02:55.734268 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 01:02:55.734276 systemd[1]: Starting systemd-journald.service... Sep 13 01:02:55.734284 systemd[1]: Starting systemd-modules-load.service... Sep 13 01:02:55.734293 systemd[1]: Starting systemd-resolved.service... Sep 13 01:02:55.734300 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 01:02:55.734306 systemd[1]: Finished kmod-static-nodes.service. Sep 13 01:02:55.734313 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 01:02:55.734319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 01:02:55.734325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 01:02:55.734334 kernel: audit: type=1130 audit(1757725375.668:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:55.734341 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 01:02:55.734348 kernel: audit: type=1130 audit(1757725375.671:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.734354 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 01:02:55.734360 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 01:02:55.734367 kernel: audit: type=1130 audit(1757725375.683:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.734373 systemd[1]: Starting dracut-cmdline.service... Sep 13 01:02:55.734380 systemd[1]: Started systemd-resolved.service. Sep 13 01:02:55.734389 kernel: audit: type=1130 audit(1757725375.688:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.734395 systemd[1]: Reached target nss-lookup.target. Sep 13 01:02:55.734402 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 01:02:55.734408 kernel: Bridge firewalling registered Sep 13 01:02:55.734416 kernel: SCSI subsystem initialized Sep 13 01:02:55.734428 systemd-journald[217]: Journal started Sep 13 01:02:55.734476 systemd-journald[217]: Runtime Journal (/run/log/journal/c6607fa8a5e84e1da82c9ab9ac9922b9) is 4.8M, max 38.8M, 34.0M free. Sep 13 01:02:55.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.738095 systemd[1]: Started systemd-journald.service. 
Sep 13 01:02:55.738109 kernel: audit: type=1130 audit(1757725375.734:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.668744 systemd-modules-load[218]: Inserted module 'overlay' Sep 13 01:02:55.685241 systemd-resolved[219]: Positive Trust Anchors: Sep 13 01:02:55.685250 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:02:55.685278 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 01:02:55.687043 systemd-resolved[219]: Defaulting to hostname 'linux'. 
Sep 13 01:02:55.716241 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 13 01:02:55.740432 dracut-cmdline[233]: dracut-dracut-053 Sep 13 01:02:55.740432 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 13 01:02:55.740432 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 01:02:55.749002 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 01:02:55.749036 kernel: device-mapper: uevent: version 1.0.3 Sep 13 01:02:55.749045 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 01:02:55.751113 systemd-modules-load[218]: Inserted module 'dm_multipath' Sep 13 01:02:55.751517 systemd[1]: Finished systemd-modules-load.service. Sep 13 01:02:55.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.752019 systemd[1]: Starting systemd-sysctl.service... Sep 13 01:02:55.755184 kernel: audit: type=1130 audit(1757725375.750:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.758487 systemd[1]: Finished systemd-sysctl.service. Sep 13 01:02:55.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:55.761184 kernel: audit: type=1130 audit(1757725375.757:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.767182 kernel: Loading iSCSI transport class v2.0-870. Sep 13 01:02:55.779194 kernel: iscsi: registered transport (tcp) Sep 13 01:02:55.796191 kernel: iscsi: registered transport (qla4xxx) Sep 13 01:02:55.796235 kernel: QLogic iSCSI HBA Driver Sep 13 01:02:55.813120 systemd[1]: Finished dracut-cmdline.service. Sep 13 01:02:55.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.813804 systemd[1]: Starting dracut-pre-udev.service... Sep 13 01:02:55.816430 kernel: audit: type=1130 audit(1757725375.812:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:55.854200 kernel: raid6: avx2x4 gen() 47809 MB/s Sep 13 01:02:55.869185 kernel: raid6: avx2x4 xor() 21139 MB/s Sep 13 01:02:55.886190 kernel: raid6: avx2x2 gen() 52722 MB/s Sep 13 01:02:55.903190 kernel: raid6: avx2x2 xor() 31306 MB/s Sep 13 01:02:55.920209 kernel: raid6: avx2x1 gen() 43583 MB/s Sep 13 01:02:55.937190 kernel: raid6: avx2x1 xor() 26760 MB/s Sep 13 01:02:55.954200 kernel: raid6: sse2x4 gen() 20780 MB/s Sep 13 01:02:55.971191 kernel: raid6: sse2x4 xor() 11626 MB/s Sep 13 01:02:55.988204 kernel: raid6: sse2x2 gen() 20028 MB/s Sep 13 01:02:56.005195 kernel: raid6: sse2x2 xor() 12696 MB/s Sep 13 01:02:56.022195 kernel: raid6: sse2x1 gen() 17966 MB/s Sep 13 01:02:56.039414 kernel: raid6: sse2x1 xor() 8837 MB/s Sep 13 01:02:56.039465 kernel: raid6: using algorithm avx2x2 gen() 52722 MB/s Sep 13 01:02:56.039486 kernel: raid6: .... 
xor() 31306 MB/s, rmw enabled Sep 13 01:02:56.040600 kernel: raid6: using avx2x2 recovery algorithm Sep 13 01:02:56.050193 kernel: xor: automatically using best checksumming function avx Sep 13 01:02:56.115190 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 01:02:56.120324 systemd[1]: Finished dracut-pre-udev.service. Sep 13 01:02:56.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:56.121063 systemd[1]: Starting systemd-udevd.service... Sep 13 01:02:56.119000 audit: BPF prog-id=7 op=LOAD Sep 13 01:02:56.119000 audit: BPF prog-id=8 op=LOAD Sep 13 01:02:56.128201 kernel: audit: type=1130 audit(1757725376.119:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:56.133355 systemd-udevd[415]: Using default interface naming scheme 'v252'. Sep 13 01:02:56.136304 systemd[1]: Started systemd-udevd.service. Sep 13 01:02:56.136998 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 01:02:56.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:56.146544 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Sep 13 01:02:56.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:56.165113 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 01:02:56.165689 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 01:02:56.235274 systemd[1]: Finished systemd-udev-trigger.service. 
Sep 13 01:02:56.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:56.286655 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 13 01:02:56.286690 kernel: vmw_pvscsi: using 64bit dma Sep 13 01:02:56.287841 kernel: vmw_pvscsi: max_id: 16 Sep 13 01:02:56.287858 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 13 01:02:56.302544 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 13 01:02:56.302576 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 13 01:02:56.302584 kernel: vmw_pvscsi: using MSI-X Sep 13 01:02:56.305272 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 13 01:02:56.305400 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 13 01:02:56.308251 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 13 01:02:56.311183 kernel: libata version 3.00 loaded. Sep 13 01:02:56.321186 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 01:02:56.330184 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Sep 13 01:02:56.333184 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 01:02:56.333205 kernel: AES CTR mode by8 optimization enabled Sep 13 01:02:56.335184 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 13 01:02:56.338799 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 13 01:02:56.341545 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 13 01:02:56.341622 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 13 01:02:56.341683 kernel: scsi host1: ata_piix Sep 13 01:02:56.341771 kernel: scsi host2: ata_piix Sep 13 01:02:56.341844 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Sep 13 01:02:56.341857 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Sep 13 01:02:56.511243 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 13 01:02:56.517223 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 13 01:02:56.525323 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 13 01:02:56.558979 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 01:02:56.559061 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 13 01:02:56.559126 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 13 01:02:56.559205 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 13 01:02:56.559283 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:02:56.559293 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 01:02:56.582495 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 13 01:02:56.601518 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 01:02:56.601531 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (473) Sep 13 01:02:56.601539 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 01:02:56.602981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 01:02:56.620387 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Sep 13 01:02:56.625597 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 01:02:56.654889 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 01:02:56.655037 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 01:02:56.655733 systemd[1]: Starting disk-uuid.service... Sep 13 01:02:56.692185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:02:56.698187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:02:57.714199 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:02:57.714387 disk-uuid[548]: The operation has completed successfully. Sep 13 01:02:57.771493 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 01:02:57.771552 systemd[1]: Finished disk-uuid.service. Sep 13 01:02:57.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:57.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:57.772157 systemd[1]: Starting verity-setup.service... Sep 13 01:02:57.783189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 01:02:57.884796 systemd[1]: Found device dev-mapper-usr.device. Sep 13 01:02:57.885763 systemd[1]: Mounting sysusr-usr.mount... Sep 13 01:02:57.886320 systemd[1]: Finished verity-setup.service. Sep 13 01:02:57.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.006183 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 01:02:58.004929 systemd[1]: Mounted sysusr-usr.mount. 
Sep 13 01:02:58.005622 systemd[1]: Starting afterburn-network-kargs.service... Sep 13 01:02:58.006086 systemd[1]: Starting ignition-setup.service... Sep 13 01:02:58.023303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:02:58.023339 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:02:58.023348 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:02:58.029189 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 01:02:58.035783 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 01:02:58.042284 systemd[1]: Finished ignition-setup.service. Sep 13 01:02:58.042870 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 01:02:58.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.210689 systemd[1]: Finished afterburn-network-kargs.service. Sep 13 01:02:58.211509 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 01:02:58.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.269737 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 01:02:58.270639 systemd[1]: Starting systemd-networkd.service... Sep 13 01:02:58.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:58.269000 audit: BPF prog-id=9 op=LOAD Sep 13 01:02:58.285867 systemd-networkd[734]: lo: Link UP Sep 13 01:02:58.285873 systemd-networkd[734]: lo: Gained carrier Sep 13 01:02:58.286750 systemd-networkd[734]: Enumeration completed Sep 13 01:02:58.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.286817 systemd[1]: Started systemd-networkd.service. Sep 13 01:02:58.286971 systemd[1]: Reached target network.target. Sep 13 01:02:58.287516 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 13 01:02:58.287805 systemd[1]: Starting iscsiuio.service... Sep 13 01:02:58.292274 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 13 01:02:58.292403 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 13 01:02:58.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.291206 systemd[1]: Started iscsiuio.service. Sep 13 01:02:58.295093 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:02:58.295093 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 01:02:58.295093 iscsid[739]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 01:02:58.295093 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Sep 13 01:02:58.295093 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 01:02:58.295093 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:02:58.295093 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 01:02:58.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.291829 systemd[1]: Starting iscsid.service... Sep 13 01:02:58.293041 systemd-networkd[734]: ens192: Link UP Sep 13 01:02:58.293043 systemd-networkd[734]: ens192: Gained carrier Sep 13 01:02:58.295004 systemd[1]: Started iscsid.service. Sep 13 01:02:58.295629 systemd[1]: Starting dracut-initqueue.service... Sep 13 01:02:58.304568 systemd[1]: Finished dracut-initqueue.service. Sep 13 01:02:58.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.305081 systemd[1]: Reached target remote-fs-pre.target. Sep 13 01:02:58.305636 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:02:58.305867 systemd[1]: Reached target remote-fs.target. Sep 13 01:02:58.306587 systemd[1]: Starting dracut-pre-mount.service... Sep 13 01:02:58.312385 systemd[1]: Finished dracut-pre-mount.service. Sep 13 01:02:58.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:58.349706 ignition[605]: Ignition 2.14.0 Sep 13 01:02:58.349718 ignition[605]: Stage: fetch-offline Sep 13 01:02:58.349759 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:02:58.349781 ignition[605]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:02:58.383911 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:02:58.384300 ignition[605]: parsed url from cmdline: "" Sep 13 01:02:58.384354 ignition[605]: no config URL provided Sep 13 01:02:58.384516 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:02:58.384717 ignition[605]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:02:58.385225 ignition[605]: config successfully fetched Sep 13 01:02:58.385246 ignition[605]: parsing config with SHA512: 37be575e64615c34d87caa6a86bc7a7664f2e51688697520700f2c764b327a45da6e19242780555a66459f4d355e6c15d7209258c8461281ec4c345f96f34a85 Sep 13 01:02:58.412956 unknown[605]: fetched base config from "system" Sep 13 01:02:58.413235 unknown[605]: fetched user config from "vmware" Sep 13 01:02:58.413760 ignition[605]: fetch-offline: fetch-offline passed Sep 13 01:02:58.413961 ignition[605]: Ignition finished successfully Sep 13 01:02:58.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:58.414696 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 01:02:58.414856 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 01:02:58.415348 systemd[1]: Starting ignition-kargs.service... 
Sep 13 01:02:58.421047 ignition[753]: Ignition 2.14.0
Sep 13 01:02:58.421055 ignition[753]: Stage: kargs
Sep 13 01:02:58.421116 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:02:58.421127 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Sep 13 01:02:58.422397 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 01:02:58.423837 ignition[753]: kargs: kargs passed
Sep 13 01:02:58.423864 ignition[753]: Ignition finished successfully
Sep 13 01:02:58.424811 systemd[1]: Finished ignition-kargs.service.
Sep 13 01:02:58.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:58.425464 systemd[1]: Starting ignition-disks.service...
Sep 13 01:02:58.429845 ignition[760]: Ignition 2.14.0
Sep 13 01:02:58.430057 ignition[760]: Stage: disks
Sep 13 01:02:58.430245 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:02:58.430397 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Sep 13 01:02:58.431683 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 01:02:58.433261 ignition[760]: disks: disks passed
Sep 13 01:02:58.433408 ignition[760]: Ignition finished successfully
Sep 13 01:02:58.433969 systemd[1]: Finished ignition-disks.service.
Sep 13 01:02:58.434143 systemd[1]: Reached target initrd-root-device.target.
Sep 13 01:02:58.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:58.434262 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:02:58.434426 systemd[1]: Reached target local-fs.target.
Sep 13 01:02:58.434580 systemd[1]: Reached target sysinit.target.
Sep 13 01:02:58.434737 systemd[1]: Reached target basic.target.
Sep 13 01:02:58.435417 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 01:02:58.458780 systemd-fsck[768]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks
Sep 13 01:02:58.460598 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 01:02:58.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:58.461252 systemd[1]: Mounting sysroot.mount...
Sep 13 01:02:58.475187 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 01:02:58.475186 systemd[1]: Mounted sysroot.mount.
Sep 13 01:02:58.475405 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 01:02:58.478275 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 01:02:58.478775 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 01:02:58.478813 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 01:02:58.478830 systemd[1]: Reached target ignition-diskful.target.
Sep 13 01:02:58.480905 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 01:02:58.481487 systemd[1]: Starting initrd-setup-root.service...
Sep 13 01:02:58.484902 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 01:02:58.491342 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Sep 13 01:02:58.494066 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 01:02:58.497198 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 01:02:58.651538 systemd[1]: Finished initrd-setup-root.service.
Sep 13 01:02:58.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:58.652296 systemd[1]: Starting ignition-mount.service...
Sep 13 01:02:58.652954 systemd[1]: Starting sysroot-boot.service...
Sep 13 01:02:58.657047 bash[819]: umount: /sysroot/usr/share/oem: not mounted.
Sep 13 01:02:58.662620 ignition[820]: INFO : Ignition 2.14.0
Sep 13 01:02:58.662858 ignition[820]: INFO : Stage: mount
Sep 13 01:02:58.663029 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:02:58.663189 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Sep 13 01:02:58.664721 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 01:02:58.666363 ignition[820]: INFO : mount: mount passed
Sep 13 01:02:58.666524 ignition[820]: INFO : Ignition finished successfully
Sep 13 01:02:58.667505 systemd[1]: Finished ignition-mount.service.
Sep 13 01:02:58.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:58.679285 systemd[1]: Finished sysroot-boot.service.
Sep 13 01:02:58.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:58.958440 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 01:02:59.015201 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829)
Sep 13 01:02:59.018719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 01:02:59.018742 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:02:59.018759 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:02:59.029235 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 01:02:59.032756 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 01:02:59.033812 systemd[1]: Starting ignition-files.service...
Sep 13 01:02:59.045485 ignition[849]: INFO : Ignition 2.14.0
Sep 13 01:02:59.045485 ignition[849]: INFO : Stage: files
Sep 13 01:02:59.045869 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:02:59.045869 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Sep 13 01:02:59.047032 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 01:02:59.049416 ignition[849]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 01:02:59.050463 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 01:02:59.050463 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 01:02:59.055236 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 01:02:59.055461 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 01:02:59.057871 unknown[849]: wrote ssh authorized keys file for user: core
Sep 13 01:02:59.058388 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 01:02:59.059484 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 01:02:59.059484 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 01:02:59.669521 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 01:02:59.747337 systemd-networkd[734]: ens192: Gained IPv6LL
Sep 13 01:03:00.464214 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 01:03:00.466846 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 01:03:00.467068 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 01:03:00.779785 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 01:03:01.045067 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 01:03:01.045067 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:03:01.045584 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:03:01.058108 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:03:01.058362 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:03:01.058362 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 01:03:01.058362 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 01:03:01.069609 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Sep 13 01:03:01.069860 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:03:01.083609 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3834732997"
Sep 13 01:03:01.083954 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3834732997": device or resource busy
Sep 13 01:03:01.083954 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3834732997", trying btrfs: device or resource busy
Sep 13 01:03:01.083954 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3834732997"
Sep 13 01:03:01.085993 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3834732997"
Sep 13 01:03:01.106507 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3834732997"
Sep 13 01:03:01.106823 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3834732997"
Sep 13 01:03:01.107090 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Sep 13 01:03:01.107378 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 01:03:01.107729 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 01:03:01.107732 systemd[1]: mnt-oem3834732997.mount: Deactivated successfully.
Sep 13 01:03:01.587096 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Sep 13 01:03:02.882304 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 01:03:02.882724 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 01:03:02.882724 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 01:03:02.883315 ignition[849]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 01:03:03.002391 ignition[849]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 01:03:03.002672 ignition[849]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 01:03:03.002672 ignition[849]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:03:03.002672 ignition[849]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:03:03.002672 ignition[849]: INFO : files: files passed
Sep 13 01:03:03.002672 ignition[849]: INFO : Ignition finished successfully
Sep 13 01:03:03.004217 systemd[1]: Finished ignition-files.service.
Sep 13 01:03:03.008193 kernel: kauditd_printk_skb: 24 callbacks suppressed
Sep 13 01:03:03.008227 kernel: audit: type=1130 audit(1757725383.003:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.005103 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 01:03:03.009063 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 01:03:03.010511 systemd[1]: Starting ignition-quench.service...
Sep 13 01:03:03.012386 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 01:03:03.012616 systemd[1]: Finished ignition-quench.service.
Sep 13 01:03:03.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.013449 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 01:03:03.017366 kernel: audit: type=1130 audit(1757725383.011:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.018363 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 01:03:03.018688 systemd[1]: Reached target ignition-complete.target.
Sep 13 01:03:03.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.019395 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 01:03:03.024370 kernel: audit: type=1131 audit(1757725383.011:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.024388 kernel: audit: type=1130 audit(1757725383.017:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.030031 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 01:03:03.030093 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 01:03:03.030282 systemd[1]: Reached target initrd-fs.target.
Sep 13 01:03:03.030376 systemd[1]: Reached target initrd.target.
Sep 13 01:03:03.030489 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 01:03:03.031042 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 01:03:03.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.036664 kernel: audit: type=1130 audit(1757725383.029:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.036683 kernel: audit: type=1131 audit(1757725383.029:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.037448 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 01:03:03.037975 systemd[1]: Starting initrd-cleanup.service...
Sep 13 01:03:03.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.041184 kernel: audit: type=1130 audit(1757725383.036:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.043945 systemd[1]: Stopped target nss-lookup.target.
Sep 13 01:03:03.044111 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 01:03:03.044308 systemd[1]: Stopped target timers.target.
Sep 13 01:03:03.044477 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 01:03:03.044538 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 01:03:03.044830 systemd[1]: Stopped target initrd.target.
Sep 13 01:03:03.047341 kernel: audit: type=1131 audit(1757725383.043:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.047284 systemd[1]: Stopped target basic.target.
Sep 13 01:03:03.047442 systemd[1]: Stopped target ignition-complete.target.
Sep 13 01:03:03.047626 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 01:03:03.047803 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 01:03:03.048001 systemd[1]: Stopped target remote-fs.target.
Sep 13 01:03:03.048245 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 01:03:03.048400 systemd[1]: Stopped target sysinit.target.
Sep 13 01:03:03.048572 systemd[1]: Stopped target local-fs.target.
Sep 13 01:03:03.048736 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 01:03:03.048907 systemd[1]: Stopped target swap.target.
Sep 13 01:03:03.049058 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 01:03:03.049117 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 01:03:03.051842 kernel: audit: type=1131 audit(1757725383.048:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.049308 systemd[1]: Stopped target cryptsetup.target.
Sep 13 01:03:03.054389 kernel: audit: type=1131 audit(1757725383.050:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.051769 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 01:03:03.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.051827 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 01:03:03.051997 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 01:03:03.052053 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 01:03:03.054538 systemd[1]: Stopped target paths.target.
Sep 13 01:03:03.054689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 01:03:03.056197 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 01:03:03.056358 systemd[1]: Stopped target slices.target.
Sep 13 01:03:03.056547 systemd[1]: Stopped target sockets.target.
Sep 13 01:03:03.056733 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 01:03:03.056801 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 01:03:03.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.057073 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 01:03:03.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.057132 systemd[1]: Stopped ignition-files.service.
Sep 13 01:03:03.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.059690 iscsid[739]: iscsid shutting down.
Sep 13 01:03:03.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.057781 systemd[1]: Stopping ignition-mount.service...
Sep 13 01:03:03.059054 systemd[1]: Stopping iscsid.service...
Sep 13 01:03:03.059138 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 01:03:03.059217 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 01:03:03.059920 systemd[1]: Stopping sysroot-boot.service...
Sep 13 01:03:03.060230 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 01:03:03.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.067006 ignition[888]: INFO : Ignition 2.14.0
Sep 13 01:03:03.067006 ignition[888]: INFO : Stage: umount
Sep 13 01:03:03.067006 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:03:03.067006 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Sep 13 01:03:03.067006 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 01:03:03.060318 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 01:03:03.060526 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 01:03:03.060601 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 01:03:03.061909 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 01:03:03.061987 systemd[1]: Stopped iscsid.service.
Sep 13 01:03:03.062402 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 01:03:03.062462 systemd[1]: Closed iscsid.socket.
Sep 13 01:03:03.062620 systemd[1]: Stopping iscsiuio.service...
Sep 13 01:03:03.062833 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 01:03:03.062900 systemd[1]: Finished initrd-cleanup.service.
Sep 13 01:03:03.064062 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 01:03:03.064134 systemd[1]: Stopped iscsiuio.service.
Sep 13 01:03:03.066722 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 01:03:03.066742 systemd[1]: Closed iscsiuio.socket.
Sep 13 01:03:03.069870 ignition[888]: INFO : umount: umount passed
Sep 13 01:03:03.069980 ignition[888]: INFO : Ignition finished successfully
Sep 13 01:03:03.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.070538 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 01:03:03.070592 systemd[1]: Stopped ignition-mount.service.
Sep 13 01:03:03.070731 systemd[1]: Stopped target network.target.
Sep 13 01:03:03.070814 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 01:03:03.070837 systemd[1]: Stopped ignition-disks.service.
Sep 13 01:03:03.070939 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 01:03:03.070958 systemd[1]: Stopped ignition-kargs.service.
Sep 13 01:03:03.071060 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 01:03:03.071079 systemd[1]: Stopped ignition-setup.service.
Sep 13 01:03:03.071244 systemd[1]: Stopping systemd-networkd.service...
Sep 13 01:03:03.071404 systemd[1]: Stopping systemd-resolved.service...
Sep 13 01:03:03.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.077134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 01:03:03.077213 systemd[1]: Stopped systemd-resolved.service.
Sep 13 01:03:03.077977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 01:03:03.080000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 01:03:03.081761 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 01:03:03.081818 systemd[1]: Stopped systemd-networkd.service.
Sep 13 01:03:03.082067 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 01:03:03.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.082084 systemd[1]: Closed systemd-networkd.socket.
Sep 13 01:03:03.082701 systemd[1]: Stopping network-cleanup.service...
Sep 13 01:03:03.082812 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 01:03:03.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.082839 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 01:03:03.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.083014 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 13 01:03:03.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.083037 systemd[1]: Stopped afterburn-network-kargs.service.
Sep 13 01:03:03.083167 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 01:03:03.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.083206 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 01:03:03.083429 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 01:03:03.083449 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 01:03:03.083689 systemd[1]: Stopping systemd-udevd.service...
Sep 13 01:03:03.085152 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 01:03:03.085430 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 01:03:03.085480 systemd[1]: Stopped sysroot-boot.service.
Sep 13 01:03:03.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.085982 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 01:03:03.085000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 01:03:03.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.086010 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 01:03:03.087787 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 01:03:03.087838 systemd[1]: Stopped network-cleanup.service.
Sep 13 01:03:03.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.088725 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 01:03:03.088793 systemd[1]: Stopped systemd-udevd.service.
Sep 13 01:03:03.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.089130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 01:03:03.089154 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 01:03:03.089398 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 01:03:03.089415 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 01:03:03.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.089552 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 01:03:03.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.089573 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 01:03:03.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.089761 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 01:03:03.089782 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 01:03:03.089915 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 01:03:03.089934 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 01:03:03.090441 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 01:03:03.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.090630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 01:03:03.090656 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 01:03:03.094271 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 01:03:03.094328 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 01:03:03.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:03.094564 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 01:03:03.095018 systemd[1]: Starting initrd-switch-root.service...
Sep 13 01:03:03.101391 systemd[1]: Switching root.
Sep 13 01:03:03.120412 systemd-journald[217]: Journal stopped
Sep 13 01:03:06.065987 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Sep 13 01:03:06.066006 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 01:03:06.066014 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 01:03:06.066020 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 01:03:06.066025 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 01:03:06.066032 kernel: SELinux: policy capability open_perms=1
Sep 13 01:03:06.066038 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 01:03:06.066044 kernel: SELinux: policy capability always_check_network=0
Sep 13 01:03:06.066049 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 01:03:06.066055 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 01:03:06.066060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 01:03:06.066066 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 01:03:06.066074 systemd[1]: Successfully loaded SELinux policy in 41.025ms.
Sep 13 01:03:06.066081 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.392ms.
Sep 13 01:03:06.066090 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:03:06.066096 systemd[1]: Detected virtualization vmware.
Sep 13 01:03:06.066103 systemd[1]: Detected architecture x86-64.
Sep 13 01:03:06.066110 systemd[1]: Detected first boot.
Sep 13 01:03:06.066117 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:03:06.066123 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 01:03:06.066129 systemd[1]: Populated /etc with preset unit settings.
Sep 13 01:03:06.066136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:03:06.066143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:03:06.066150 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:03:06.066158 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 01:03:06.066164 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 01:03:06.066178 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:03:06.066185 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 01:03:06.066192 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 01:03:06.066198 systemd[1]: Created slice system-getty.slice.
Sep 13 01:03:06.066205 systemd[1]: Created slice system-modprobe.slice.
Sep 13 01:03:06.066212 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 01:03:06.066219 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 01:03:06.066226 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 01:03:06.066233 systemd[1]: Created slice user.slice.
Sep 13 01:03:06.066239 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:03:06.066246 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 01:03:06.066252 systemd[1]: Set up automount boot.automount.
Sep 13 01:03:06.066259 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 01:03:06.066266 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 01:03:06.066274 systemd[1]: Stopped target initrd-fs.target.
Sep 13 01:03:06.066281 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 01:03:06.066288 systemd[1]: Reached target integritysetup.target.
Sep 13 01:03:06.066295 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 01:03:06.066301 systemd[1]: Reached target remote-fs.target.
Sep 13 01:03:06.066308 systemd[1]: Reached target slices.target.
Sep 13 01:03:06.066315 systemd[1]: Reached target swap.target.
Sep 13 01:03:06.066321 systemd[1]: Reached target torcx.target.
Sep 13 01:03:06.066329 systemd[1]: Reached target veritysetup.target.
Sep 13 01:03:06.066336 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 01:03:06.066342 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 01:03:06.066349 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:03:06.066356 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:03:06.066363 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:03:06.066372 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 01:03:06.066379 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 01:03:06.066386 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 01:03:06.066393 systemd[1]: Mounting media.mount...
Sep 13 01:03:06.066400 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:06.066407 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 01:03:06.066414 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 01:03:06.066422 systemd[1]: Mounting tmp.mount...
Sep 13 01:03:06.066429 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 01:03:06.066436 systemd[1]: Starting ignition-delete-config.service...
Sep 13 01:03:06.066443 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:03:06.066449 systemd[1]: Starting modprobe@configfs.service...
Sep 13 01:03:06.066456 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:03:06.066463 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:03:06.066470 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:03:06.066477 systemd[1]: Starting modprobe@fuse.service...
Sep 13 01:03:06.066485 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:03:06.066493 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 01:03:06.066500 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 01:03:06.066507 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 01:03:06.066514 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 01:03:06.066522 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 01:03:06.066529 systemd[1]: Stopped systemd-journald.service.
Sep 13 01:03:06.066535 systemd[1]: Starting systemd-journald.service...
Sep 13 01:03:06.066542 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:03:06.066550 systemd[1]: Starting systemd-network-generator.service...
Sep 13 01:03:06.066557 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 01:03:06.066565 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:03:06.066571 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 01:03:06.066578 systemd[1]: Stopped verity-setup.service.
Sep 13 01:03:06.066585 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:06.066592 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 01:03:06.066600 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 01:03:06.066606 systemd[1]: Mounted media.mount.
Sep 13 01:03:06.066615 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 01:03:06.066622 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 01:03:06.066628 systemd[1]: Mounted tmp.mount.
Sep 13 01:03:06.066635 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:03:06.066642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:03:06.066649 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:03:06.066656 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:03:06.066663 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:03:06.066671 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 01:03:06.066679 systemd[1]: Finished modprobe@configfs.service.
Sep 13 01:03:06.066686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:03:06.066692 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:03:06.066699 systemd[1]: Finished systemd-network-generator.service.
Sep 13 01:03:06.066706 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 01:03:06.066713 systemd[1]: Reached target network-pre.target.
Sep 13 01:03:06.066720 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 01:03:06.066727 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 01:03:06.066736 systemd-journald[1001]: Journal started
Sep 13 01:03:06.066766 systemd-journald[1001]: Runtime Journal (/run/log/journal/59d73cca467b4f65994bd09c2dd086e5) is 4.8M, max 38.8M, 34.0M free.
Sep 13 01:03:03.237000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 01:03:03.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:03:03.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:03:03.399000 audit: BPF prog-id=10 op=LOAD
Sep 13 01:03:03.399000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 01:03:03.399000 audit: BPF prog-id=11 op=LOAD
Sep 13 01:03:03.399000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 01:03:03.493000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 01:03:03.493000 audit[922]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b4 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:03.493000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:03:03.494000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 01:03:03.494000 audit[922]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d999 a2=1ed a3=0 items=2 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:03.494000 audit: CWD cwd="/"
Sep 13 01:03:03.494000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:03.494000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:03.494000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:03:05.942000 audit: BPF prog-id=12 op=LOAD
Sep 13 01:03:05.942000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 01:03:05.942000 audit: BPF prog-id=13 op=LOAD
Sep 13 01:03:05.942000 audit: BPF prog-id=14 op=LOAD
Sep 13 01:03:05.942000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 01:03:05.942000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 01:03:05.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:05.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:05.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:05.952000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 01:03:06.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.017000 audit: BPF prog-id=15 op=LOAD
Sep 13 01:03:06.017000 audit: BPF prog-id=16 op=LOAD
Sep 13 01:03:06.017000 audit: BPF prog-id=17 op=LOAD
Sep 13 01:03:06.017000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 01:03:06.017000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 01:03:06.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.061000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 01:03:06.061000 audit[1001]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff9f76db70 a2=4000 a3=7fff9f76dc0c items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:06.061000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 01:03:03.490400 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:03:05.941132 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 01:03:03.491621 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 01:03:05.941141 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Sep 13 01:03:03.491637 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 01:03:05.944285 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 01:03:03.491658 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 01:03:03.491665 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 01:03:03.491685 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 01:03:03.491692 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 01:03:03.491822 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 01:03:03.491846 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 01:03:06.068245 jq[988]: true
Sep 13 01:03:03.491860 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 01:03:03.493370 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 01:03:03.493413 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 01:03:03.493432 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 01:03:03.493442 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 01:03:03.493452 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 01:03:03.493459 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 01:03:05.506906 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:05.507057 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:05.507122 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:05.507244 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:05.507277 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 01:03:05.507320 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-09-13T01:03:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 01:03:06.076174 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 01:03:06.076210 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:03:06.076221 kernel: fuse: init (API version 7.34)
Sep 13 01:03:06.076230 systemd[1]: Starting systemd-random-seed.service...
Sep 13 01:03:06.076242 systemd[1]: Started systemd-journald.service.
Sep 13 01:03:06.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.078070 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:03:06.078303 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 01:03:06.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.084270 systemd-journald[1001]: Time spent on flushing to /var/log/journal/59d73cca467b4f65994bd09c2dd086e5 is 58.260ms for 1982 entries.
Sep 13 01:03:06.084270 systemd-journald[1001]: System Journal (/var/log/journal/59d73cca467b4f65994bd09c2dd086e5) is 8.0M, max 584.8M, 576.8M free.
Sep 13 01:03:06.149487 systemd-journald[1001]: Received client request to flush runtime journal.
Sep 13 01:03:06.149524 kernel: loop: module loaded
Sep 13 01:03:06.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.079442 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 01:03:06.080281 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:03:06.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.080585 systemd[1]: Finished systemd-random-seed.service.
Sep 13 01:03:06.080752 systemd[1]: Reached target first-boot-complete.target.
Sep 13 01:03:06.150322 jq[1021]: true
Sep 13 01:03:06.110736 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:03:06.111008 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 01:03:06.111084 systemd[1]: Finished modprobe@fuse.service.
Sep 13 01:03:06.112080 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 01:03:06.114553 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 01:03:06.126896 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:03:06.126980 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:03:06.127196 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:03:06.132875 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 01:03:06.133810 systemd[1]: Starting systemd-sysusers.service... Sep 13 01:03:06.149962 systemd[1]: Finished systemd-journal-flush.service. Sep 13 01:03:06.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:06.187378 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 01:03:06.188352 systemd[1]: Starting systemd-udev-settle.service... Sep 13 01:03:06.196746 udevadm[1047]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 01:03:06.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:06.235689 systemd[1]: Finished systemd-sysusers.service. Sep 13 01:03:06.409787 ignition[1025]: Ignition 2.14.0 Sep 13 01:03:06.410232 ignition[1025]: deleting config from guestinfo properties Sep 13 01:03:06.415265 ignition[1025]: Successfully deleted config Sep 13 01:03:06.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:06.415948 systemd[1]: Finished ignition-delete-config.service. Sep 13 01:03:06.678081 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 01:03:06.679185 systemd[1]: Starting systemd-udevd.service... Sep 13 01:03:06.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:06.677000 audit: BPF prog-id=18 op=LOAD Sep 13 01:03:06.677000 audit: BPF prog-id=19 op=LOAD Sep 13 01:03:06.677000 audit: BPF prog-id=7 op=UNLOAD Sep 13 01:03:06.677000 audit: BPF prog-id=8 op=UNLOAD Sep 13 01:03:06.690727 systemd-udevd[1052]: Using default interface naming scheme 'v252'. Sep 13 01:03:06.747549 systemd[1]: Started systemd-udevd.service. Sep 13 01:03:06.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:06.747000 audit: BPF prog-id=20 op=LOAD Sep 13 01:03:06.748991 systemd[1]: Starting systemd-networkd.service... Sep 13 01:03:06.759847 systemd[1]: Starting systemd-userdbd.service... Sep 13 01:03:06.758000 audit: BPF prog-id=21 op=LOAD Sep 13 01:03:06.758000 audit: BPF prog-id=22 op=LOAD Sep 13 01:03:06.758000 audit: BPF prog-id=23 op=LOAD Sep 13 01:03:06.781594 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 01:03:06.783788 systemd[1]: Started systemd-userdbd.service. Sep 13 01:03:06.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:06.818262 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 01:03:06.825238 kernel: ACPI: button: Power Button [PWRF] Sep 13 01:03:06.874000 audit[1063]: AVC avc: denied { confidentiality } for pid=1063 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 01:03:06.874000 audit[1063]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55af6717fe30 a1=338ec a2=7f908c1eebc5 a3=5 items=110 ppid=1052 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:03:06.874000 audit: CWD cwd="/" Sep 13 01:03:06.874000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=1 name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=2 name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=3 name=(null) inode=24867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=4 name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=5 name=(null) inode=24868 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=6 name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=7 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=8 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=9 name=(null) inode=24870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=10 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=11 name=(null) inode=24871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=12 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=13 name=(null) inode=24872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=14 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=15 name=(null) inode=24873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=16 name=(null) inode=24869 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=17 name=(null) inode=24874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=18 name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=19 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=20 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=21 name=(null) inode=24876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=22 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=23 name=(null) inode=24877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=24 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=25 name=(null) inode=24878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=26 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=27 name=(null) inode=24879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=28 name=(null) inode=24875 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=29 name=(null) inode=24880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=30 name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=31 name=(null) inode=24881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=32 name=(null) inode=24881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
01:03:06.874000 audit: PATH item=33 name=(null) inode=24882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=34 name=(null) inode=24881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=35 name=(null) inode=24883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=36 name=(null) inode=24881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=37 name=(null) inode=24884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=38 name=(null) inode=24881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=39 name=(null) inode=24885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=40 name=(null) inode=24881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=41 name=(null) inode=24886 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=42 
name=(null) inode=24866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.880190 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Sep 13 01:03:06.882390 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Sep 13 01:03:06.882543 kernel: Guest personality initialized and is active Sep 13 01:03:06.874000 audit: PATH item=43 name=(null) inode=24887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=44 name=(null) inode=24887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=45 name=(null) inode=24888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=46 name=(null) inode=24887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=47 name=(null) inode=24889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=48 name=(null) inode=24887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=49 name=(null) inode=24890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=50 name=(null) 
inode=24887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=51 name=(null) inode=24891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=52 name=(null) inode=24887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=53 name=(null) inode=24892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=55 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=56 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=57 name=(null) inode=24894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=58 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=59 name=(null) inode=24895 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=60 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=61 name=(null) inode=24896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=62 name=(null) inode=24896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=63 name=(null) inode=24897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=64 name=(null) inode=24896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=65 name=(null) inode=24898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=66 name=(null) inode=24896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=67 name=(null) inode=24899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.883223 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 13 01:03:06.874000 audit: PATH item=68 
name=(null) inode=24896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=69 name=(null) inode=24900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=70 name=(null) inode=24896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=71 name=(null) inode=24901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=72 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=73 name=(null) inode=24902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=74 name=(null) inode=24902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=75 name=(null) inode=24903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=76 name=(null) inode=24902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=77 name=(null) inode=24904 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=78 name=(null) inode=24902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=79 name=(null) inode=24905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=80 name=(null) inode=24902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=81 name=(null) inode=24906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=82 name=(null) inode=24902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=83 name=(null) inode=24907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=84 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=85 name=(null) inode=24908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=86 name=(null) inode=24908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=87 name=(null) inode=24909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=88 name=(null) inode=24908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=89 name=(null) inode=24910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=90 name=(null) inode=24908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=91 name=(null) inode=24911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=92 name=(null) inode=24908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=93 name=(null) inode=24912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=94 name=(null) inode=24908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=95 name=(null) inode=24913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=96 name=(null) inode=24893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=97 name=(null) inode=24914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=98 name=(null) inode=24914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=99 name=(null) inode=24915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=100 name=(null) inode=24914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=101 name=(null) inode=24916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=102 name=(null) inode=24914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=103 name=(null) inode=24917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=104 name=(null) inode=24914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=105 name=(null) inode=24918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=106 name=(null) inode=24914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=107 name=(null) inode=24919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PATH item=109 name=(null) inode=24920 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:06.874000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 01:03:06.884497 kernel: Initialized host personality Sep 13 01:03:06.887179 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 13 01:03:06.896370 systemd-networkd[1060]: lo: Link UP Sep 13 01:03:06.896375 systemd-networkd[1060]: lo: Gained carrier Sep 13 01:03:06.896681 systemd-networkd[1060]: Enumeration completed Sep 13 01:03:06.896748 systemd[1]: Started systemd-networkd.service. Sep 13 01:03:06.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:06.896964 systemd-networkd[1060]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Sep 13 01:03:06.900191 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 13 01:03:06.900313 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 13 01:03:06.900394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
Sep 13 01:03:06.901452 systemd-networkd[1060]: ens192: Link UP
Sep 13 01:03:06.901544 systemd-networkd[1060]: ens192: Gained carrier
Sep 13 01:03:06.911185 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 01:03:06.939183 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 01:03:06.942860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:03:06.947499 (udev-worker)[1059]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 13 01:03:06.960395 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 01:03:06.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:06.961299 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 01:03:07.018195 lvm[1085]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:03:07.040753 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 01:03:07.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:07.040956 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:03:07.041890 systemd[1]: Starting lvm2-activation.service...
Sep 13 01:03:07.044525 lvm[1086]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:03:07.061803 systemd[1]: Finished lvm2-activation.service.
Sep 13 01:03:07.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:07.061987 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:03:07.062090 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 01:03:07.062106 systemd[1]: Reached target local-fs.target.
Sep 13 01:03:07.062207 systemd[1]: Reached target machines.target.
Sep 13 01:03:07.063227 systemd[1]: Starting ldconfig.service...
Sep 13 01:03:07.068527 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:03:07.068556 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:03:07.069339 systemd[1]: Starting systemd-boot-update.service...
Sep 13 01:03:07.069973 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 01:03:07.070786 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 01:03:07.071582 systemd[1]: Starting systemd-sysext.service...
Sep 13 01:03:07.088681 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1088 (bootctl)
Sep 13 01:03:07.089414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 01:03:07.106294 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 01:03:07.122433 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 01:03:07.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:07.125566 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 01:03:07.125696 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 01:03:07.146185 kernel: loop0: detected capacity change from 0 to 229808
Sep 13 01:03:08.084335 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 01:03:08.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.084716 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 01:03:08.085504 kernel: kauditd_printk_skb: 225 callbacks suppressed
Sep 13 01:03:08.085533 kernel: audit: type=1130 audit(1757725388.083:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.110972 systemd-fsck[1098]: fsck.fat 4.2 (2021-01-31)
Sep 13 01:03:08.110972 systemd-fsck[1098]: /dev/sda1: 790 files, 120761/258078 clusters
Sep 13 01:03:08.113898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 01:03:08.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.115581 systemd[1]: Mounting boot.mount...
Sep 13 01:03:08.117793 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 01:03:08.117824 kernel: audit: type=1130 audit(1757725388.113:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.152186 kernel: loop1: detected capacity change from 0 to 229808
Sep 13 01:03:08.152456 systemd[1]: Mounted boot.mount.
Sep 13 01:03:08.174214 systemd[1]: Finished systemd-boot-update.service.
Sep 13 01:03:08.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.177186 kernel: audit: type=1130 audit(1757725388.173:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.208482 (sd-sysext)[1102]: Using extensions 'kubernetes'.
Sep 13 01:03:08.208935 (sd-sysext)[1102]: Merged extensions into '/usr'.
Sep 13 01:03:08.221038 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.222011 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 01:03:08.222791 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:03:08.224651 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:03:08.225694 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:03:08.226003 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:03:08.226124 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:03:08.226234 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.229012 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 01:03:08.229290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:03:08.229372 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:03:08.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.229671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:03:08.229735 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:03:08.232449 kernel: audit: type=1130 audit(1757725388.228:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.232481 kernel: audit: type=1131 audit(1757725388.228:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.232377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:03:08.232442 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:03:08.235182 kernel: audit: type=1130 audit(1757725388.231:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.235212 kernel: audit: type=1131 audit(1757725388.231:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.241006 systemd[1]: Finished systemd-sysext.service.
Sep 13 01:03:08.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.245509 kernel: audit: type=1130 audit(1757725388.239:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.245546 kernel: audit: type=1131 audit(1757725388.239:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.250594 kernel: audit: type=1130 audit(1757725388.244:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.247354 systemd[1]: Starting ensure-sysext.service...
Sep 13 01:03:08.249336 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:03:08.249375 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:03:08.250252 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 01:03:08.253241 systemd[1]: Reloading.
Sep 13 01:03:08.272163 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 01:03:08.289262 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 01:03:08.296697 /usr/lib/systemd/system-generators/torcx-generator[1128]: time="2025-09-13T01:03:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:03:08.296888 /usr/lib/systemd/system-generators/torcx-generator[1128]: time="2025-09-13T01:03:08Z" level=info msg="torcx already run"
Sep 13 01:03:08.302925 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 01:03:08.363806 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:03:08.363820 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:03:08.378689 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:03:08.416000 audit: BPF prog-id=24 op=LOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=25 op=LOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=26 op=LOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=27 op=LOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=28 op=LOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=29 op=LOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 01:03:08.416000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 01:03:08.417000 audit: BPF prog-id=30 op=LOAD
Sep 13 01:03:08.417000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 01:03:08.417000 audit: BPF prog-id=31 op=LOAD
Sep 13 01:03:08.417000 audit: BPF prog-id=32 op=LOAD
Sep 13 01:03:08.417000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 01:03:08.417000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 01:03:08.431984 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.432875 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:03:08.433938 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:03:08.435883 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:03:08.436009 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:03:08.436153 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:03:08.436268 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.436848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:03:08.436927 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:03:08.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.437249 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:03:08.437971 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:03:08.438056 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:03:08.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.438359 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.439036 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:03:08.439161 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:03:08.439308 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:03:08.439382 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.440878 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.442217 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:03:08.443700 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:03:08.443860 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:03:08.443942 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:03:08.444948 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:03:08.445112 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 01:03:08.445706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:03:08.445965 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:03:08.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.446603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:03:08.446677 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:03:08.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.446977 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:03:08.447041 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:03:08.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.447443 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:03:08.447514 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:03:08.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.447904 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:03:08.447965 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:03:08.448609 systemd[1]: Finished ensure-sysext.service.
Sep 13 01:03:08.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.477095 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 01:03:08.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.478467 systemd[1]: Starting audit-rules.service...
Sep 13 01:03:08.479965 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 01:03:08.481000 audit: BPF prog-id=33 op=LOAD
Sep 13 01:03:08.481510 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 01:03:08.483000 audit: BPF prog-id=34 op=LOAD
Sep 13 01:03:08.483396 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:03:08.485659 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 01:03:08.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.487835 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 01:03:08.489478 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 01:03:08.489773 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 01:03:08.495000 audit[1201]: SYSTEM_BOOT pid=1201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.497521 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 01:03:08.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.524366 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 01:03:08.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:08.538000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 01:03:08.538000 audit[1216]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa02b6e70 a2=420 a3=0 items=0 ppid=1196 pid=1216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:08.538000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 01:03:08.540463 augenrules[1216]: No rules
Sep 13 01:03:08.540493 systemd[1]: Finished audit-rules.service.
Sep 13 01:03:08.540669 systemd[1]: Started systemd-timesyncd.service.
Sep 13 01:03:08.540810 systemd[1]: Reached target time-set.target.
Sep 13 01:03:08.551180 ldconfig[1087]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 01:03:08.554869 systemd[1]: Finished ldconfig.service.
Sep 13 01:03:08.555948 systemd[1]: Starting systemd-update-done.service...
Sep 13 01:03:08.556181 systemd-resolved[1199]: Positive Trust Anchors:
Sep 13 01:03:08.556187 systemd-resolved[1199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:03:08.556206 systemd-resolved[1199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:04:41.413466 systemd-timesyncd[1200]: Contacted time server 66.244.16.123:123 (0.flatcar.pool.ntp.org).
Sep 13 01:04:41.413626 systemd-timesyncd[1200]: Initial clock synchronization to Sat 2025-09-13 01:04:41.413393 UTC.
Sep 13 01:04:41.416607 systemd[1]: Finished systemd-update-done.service.
Sep 13 01:04:41.438567 systemd-resolved[1199]: Defaulting to hostname 'linux'.
Sep 13 01:04:41.439625 systemd[1]: Started systemd-resolved.service.
Sep 13 01:04:41.439785 systemd[1]: Reached target network.target.
Sep 13 01:04:41.439876 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:04:41.439971 systemd[1]: Reached target sysinit.target.
Sep 13 01:04:41.440110 systemd[1]: Started motdgen.path.
Sep 13 01:04:41.440242 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 01:04:41.440441 systemd[1]: Started logrotate.timer.
Sep 13 01:04:41.440564 systemd[1]: Started mdadm.timer.
Sep 13 01:04:41.440650 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 01:04:41.440741 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 01:04:41.440762 systemd[1]: Reached target paths.target.
Sep 13 01:04:41.440846 systemd[1]: Reached target timers.target.
Sep 13 01:04:41.441075 systemd[1]: Listening on dbus.socket.
Sep 13 01:04:41.441948 systemd[1]: Starting docker.socket...
Sep 13 01:04:41.444710 systemd[1]: Listening on sshd.socket.
Sep 13 01:04:41.444874 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:04:41.445124 systemd[1]: Listening on docker.socket.
Sep 13 01:04:41.445248 systemd[1]: Reached target sockets.target.
Sep 13 01:04:41.445333 systemd[1]: Reached target basic.target.
Sep 13 01:04:41.445452 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:04:41.445465 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:04:41.446250 systemd[1]: Starting containerd.service...
Sep 13 01:04:41.447073 systemd[1]: Starting dbus.service...
Sep 13 01:04:41.447837 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 01:04:41.450194 systemd[1]: Starting extend-filesystems.service...
Sep 13 01:04:41.450333 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 01:04:41.451049 systemd[1]: Starting motdgen.service...
Sep 13 01:04:41.456505 jq[1227]: false
Sep 13 01:04:41.451896 systemd[1]: Starting prepare-helm.service...
Sep 13 01:04:41.453232 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 01:04:41.454504 systemd[1]: Starting sshd-keygen.service...
Sep 13 01:04:41.457104 systemd[1]: Starting systemd-logind.service...
Sep 13 01:04:41.458455 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:04:41.458498 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 01:04:41.458923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 01:04:41.459285 systemd[1]: Starting update-engine.service...
Sep 13 01:04:41.460157 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 01:04:41.461249 systemd[1]: Starting vmtoolsd.service...
Sep 13 01:04:41.462232 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 01:04:41.462346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 01:04:41.468574 jq[1237]: true
Sep 13 01:04:41.468998 systemd[1]: Started vmtoolsd.service.
Sep 13 01:04:41.472919 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 01:04:41.473023 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 01:04:41.483736 tar[1242]: linux-amd64/LICENSE
Sep 13 01:04:41.483736 tar[1242]: linux-amd64/helm
Sep 13 01:04:41.484050 jq[1249]: true
Sep 13 01:04:41.494991 extend-filesystems[1228]: Found loop1
Sep 13 01:04:41.499064 extend-filesystems[1228]: Found sda
Sep 13 01:04:41.499247 extend-filesystems[1228]: Found sda1
Sep 13 01:04:41.499362 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 01:04:41.499484 extend-filesystems[1228]: Found sda2
Sep 13 01:04:41.499489 systemd[1]: Finished motdgen.service.
Sep 13 01:04:41.499660 extend-filesystems[1228]: Found sda3
Sep 13 01:04:41.499828 extend-filesystems[1228]: Found usr
Sep 13 01:04:41.500130 extend-filesystems[1228]: Found sda4
Sep 13 01:04:41.500130 extend-filesystems[1228]: Found sda6
Sep 13 01:04:41.500130 extend-filesystems[1228]: Found sda7
Sep 13 01:04:41.500130 extend-filesystems[1228]: Found sda9
Sep 13 01:04:41.500130 extend-filesystems[1228]: Checking size of /dev/sda9
Sep 13 01:04:41.510539 dbus-daemon[1226]: [system] SELinux support is enabled
Sep 13 01:04:41.510640 systemd[1]: Started dbus.service.
Sep 13 01:04:41.511980 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 01:04:41.512001 systemd[1]: Reached target system-config.target.
Sep 13 01:04:41.512115 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 01:04:41.512128 systemd[1]: Reached target user-config.target.
Sep 13 01:04:41.525815 extend-filesystems[1228]: Old size kept for /dev/sda9
Sep 13 01:04:41.532648 extend-filesystems[1228]: Found sr0
Sep 13 01:04:41.533681 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 01:04:41.533782 systemd[1]: Finished extend-filesystems.service.
Sep 13 01:04:41.538077 bash[1275]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 01:04:41.539152 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 01:04:41.539425 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 01:04:41.555251 update_engine[1236]: I0913 01:04:41.554459 1236 main.cc:92] Flatcar Update Engine starting
Sep 13 01:04:41.556947 systemd[1]: Started update-engine.service.
Sep 13 01:04:41.557176 update_engine[1236]: I0913 01:04:41.557163 1236 update_check_scheduler.cc:74] Next update check in 7m4s
Sep 13 01:04:41.574767 systemd[1]: Started locksmithd.service.
Sep 13 01:04:41.587006 systemd-logind[1235]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 01:04:41.587026 systemd-logind[1235]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 01:04:41.587236 systemd-logind[1235]: New seat seat0.
Sep 13 01:04:41.595696 systemd[1]: Started systemd-logind.service.
Sep 13 01:04:41.597104 env[1244]: time="2025-09-13T01:04:41.597073966Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 01:04:41.625164 env[1244]: time="2025-09-13T01:04:41.625137974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 01:04:41.625319 env[1244]: time="2025-09-13T01:04:41.625308440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:04:41.626426 env[1244]: time="2025-09-13T01:04:41.626391422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:04:41.626498 env[1244]: time="2025-09-13T01:04:41.626487784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:04:41.626706 env[1244]: time="2025-09-13T01:04:41.626693789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:04:41.626755 env[1244]: time="2025-09-13T01:04:41.626744521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 01:04:41.626809 env[1244]: time="2025-09-13T01:04:41.626798292Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 01:04:41.626923 env[1244]: time="2025-09-13T01:04:41.626912914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 01:04:41.627011 env[1244]: time="2025-09-13T01:04:41.627001601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:04:41.627209 env[1244]: time="2025-09-13T01:04:41.627196011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:04:41.627334 env[1244]: time="2025-09-13T01:04:41.627321782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:04:41.627378 env[1244]: time="2025-09-13T01:04:41.627368784Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 01:04:41.627498 env[1244]: time="2025-09-13T01:04:41.627487790Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 01:04:41.627542 env[1244]: time="2025-09-13T01:04:41.627532498Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 01:04:41.630275 env[1244]: time="2025-09-13T01:04:41.630260102Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 01:04:41.630349 env[1244]: time="2025-09-13T01:04:41.630336271Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 01:04:41.630438 env[1244]: time="2025-09-13T01:04:41.630399259Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 01:04:41.630524 env[1244]: time="2025-09-13T01:04:41.630511042Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630584 env[1244]: time="2025-09-13T01:04:41.630570922Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630634 env[1244]: time="2025-09-13T01:04:41.630624128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630679 env[1244]: time="2025-09-13T01:04:41.630669420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630816 env[1244]: time="2025-09-13T01:04:41.630807025Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630869 env[1244]: time="2025-09-13T01:04:41.630859509Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630914 env[1244]: time="2025-09-13T01:04:41.630904642Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.630960 env[1244]: time="2025-09-13T01:04:41.630951236Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.631013 env[1244]: time="2025-09-13T01:04:41.631000895Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 01:04:41.631125 env[1244]: time="2025-09-13T01:04:41.631116277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 01:04:41.631226 env[1244]: time="2025-09-13T01:04:41.631216710Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 01:04:41.631515 env[1244]: time="2025-09-13T01:04:41.631503085Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 01:04:41.631603 env[1244]: time="2025-09-13T01:04:41.631589562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.631658 env[1244]: time="2025-09-13T01:04:41.631646373Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 01:04:41.631730 env[1244]: time="2025-09-13T01:04:41.631720987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.631780 env[1244]: time="2025-09-13T01:04:41.631770928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.631828 env[1244]: time="2025-09-13T01:04:41.631817731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.631959 env[1244]: time="2025-09-13T01:04:41.631879346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.632009 env[1244]: time="2025-09-13T01:04:41.631999908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.632056 env[1244]: time="2025-09-13T01:04:41.632045302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.632115 env[1244]: time="2025-09-13T01:04:41.632102773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.632172 env[1244]: time="2025-09-13T01:04:41.632162237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 01:04:41.632227 env[1244]: time="2025-09-13T01:04:41.632216991Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..."
type=io.containerd.internal.v1 Sep 13 01:04:41.632362 env[1244]: time="2025-09-13T01:04:41.632352779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 01:04:41.632429 env[1244]: time="2025-09-13T01:04:41.632402642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 01:04:41.632484 env[1244]: time="2025-09-13T01:04:41.632474621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 01:04:41.632529 env[1244]: time="2025-09-13T01:04:41.632519507Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 01:04:41.632585 env[1244]: time="2025-09-13T01:04:41.632573795Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 01:04:41.632635 env[1244]: time="2025-09-13T01:04:41.632623371Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 01:04:41.632686 env[1244]: time="2025-09-13T01:04:41.632675970Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 01:04:41.632752 env[1244]: time="2025-09-13T01:04:41.632741392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 01:04:41.632947 env[1244]: time="2025-09-13T01:04:41.632916560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 01:04:41.634841 env[1244]: time="2025-09-13T01:04:41.634520878Z" level=info msg="Connect containerd service" Sep 13 01:04:41.634977 env[1244]: time="2025-09-13T01:04:41.634966234Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 01:04:41.635339 env[1244]: time="2025-09-13T01:04:41.635326324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:04:41.635521 env[1244]: time="2025-09-13T01:04:41.635510965Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 01:04:41.640506 env[1244]: time="2025-09-13T01:04:41.640492655Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 01:04:41.640586 env[1244]: time="2025-09-13T01:04:41.640577104Z" level=info msg="containerd successfully booted in 0.050205s" Sep 13 01:04:41.640639 systemd[1]: Started containerd.service. 
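The containerd entries above are logfmt-style records: space-separated `key=value` pairs whose values are double-quoted when they contain spaces (with `\"` escaping inside). A minimal sketch of splitting such a line into fields, using a sample entry copied from this log; the parser is an illustration, not containerd's own implementation:

```python
import re

# Matches one key=value pair: the value is either a double-quoted string
# (allowing \" escapes) or a bare run of non-space characters.
_PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    """Split a logfmt-style containerd log line into a field dict."""
    fields = {}
    for key, raw in _PAIR.findall(line):
        if raw.startswith('"') and raw.endswith('"'):
            # Strip the surrounding quotes and unescape embedded \" sequences.
            raw = raw[1:-1].replace('\\"', '"')
        fields[key] = raw
    return fields

# Sample taken verbatim from the first containerd entry above.
sample = 'time="2025-09-13T01:04:41.597073966Z" level=info msg="starting containerd" version=1.6.16'
parsed = parse_logfmt(sample)
```

`parsed` then carries `level`, `msg`, `time`, and `version` as plain strings, which is enough to filter the boot log for `level=warning`/`level=error` entries like the devmapper and CNI failures above.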
Sep 13 01:04:41.641663 env[1244]: time="2025-09-13T01:04:41.641645486Z" level=info msg="Start subscribing containerd event"
Sep 13 01:04:41.641731 env[1244]: time="2025-09-13T01:04:41.641720684Z" level=info msg="Start recovering state"
Sep 13 01:04:41.641814 env[1244]: time="2025-09-13T01:04:41.641805342Z" level=info msg="Start event monitor"
Sep 13 01:04:41.641862 env[1244]: time="2025-09-13T01:04:41.641852852Z" level=info msg="Start snapshots syncer"
Sep 13 01:04:41.641907 env[1244]: time="2025-09-13T01:04:41.641897981Z" level=info msg="Start cni network conf syncer for default"
Sep 13 01:04:41.641953 env[1244]: time="2025-09-13T01:04:41.641942131Z" level=info msg="Start streaming server"
Sep 13 01:04:41.691574 systemd-networkd[1060]: ens192: Gained IPv6LL
Sep 13 01:04:41.692764 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 01:04:41.693073 systemd[1]: Reached target network-online.target.
Sep 13 01:04:41.694703 systemd[1]: Starting kubelet.service...
Sep 13 01:04:42.037434 tar[1242]: linux-amd64/README.md
Sep 13 01:04:42.041033 systemd[1]: Finished prepare-helm.service.
Sep 13 01:04:42.127320 locksmithd[1284]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 01:04:42.565292 sshd_keygen[1254]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 01:04:42.578709 systemd[1]: Finished sshd-keygen.service.
Sep 13 01:04:42.579912 systemd[1]: Starting issuegen.service...
Sep 13 01:04:42.583056 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 01:04:42.583146 systemd[1]: Finished issuegen.service.
Sep 13 01:04:42.584204 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 01:04:42.588369 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 01:04:42.589297 systemd[1]: Started getty@tty1.service.
Sep 13 01:04:42.590145 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 01:04:42.590343 systemd[1]: Reached target getty.target.
Sep 13 01:04:43.317058 systemd[1]: Started kubelet.service.
Sep 13 01:04:43.317406 systemd[1]: Reached target multi-user.target.
Sep 13 01:04:43.318543 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 01:04:43.322917 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 01:04:43.323018 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 01:04:43.323192 systemd[1]: Startup finished in 920ms (kernel) + 7.637s (initrd) + 7.281s (userspace) = 15.839s.
Sep 13 01:04:43.409357 login[1353]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:04:43.410529 login[1354]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:04:43.430360 systemd[1]: Created slice user-500.slice.
Sep 13 01:04:43.431728 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 01:04:43.435786 systemd-logind[1235]: New session 2 of user core.
Sep 13 01:04:43.442181 systemd-logind[1235]: New session 1 of user core.
Sep 13 01:04:43.445558 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 01:04:43.446720 systemd[1]: Starting user@500.service...
Sep 13 01:04:43.451979 (systemd)[1360]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:04:43.540193 systemd[1360]: Queued start job for default target default.target.
Sep 13 01:04:43.540715 systemd[1360]: Reached target paths.target.
Sep 13 01:04:43.540735 systemd[1360]: Reached target sockets.target.
Sep 13 01:04:43.540746 systemd[1360]: Reached target timers.target.
Sep 13 01:04:43.540754 systemd[1360]: Reached target basic.target.
Sep 13 01:04:43.540796 systemd[1360]: Reached target default.target.
Sep 13 01:04:43.540816 systemd[1360]: Startup finished in 85ms.
Sep 13 01:04:43.540839 systemd[1]: Started user@500.service.
Sep 13 01:04:43.541727 systemd[1]: Started session-1.scope.
Sep 13 01:04:43.542272 systemd[1]: Started session-2.scope.
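systemd's "Startup finished" line above reports the boot broken into kernel, initrd, and userspace phases plus a total. A small sketch re-adding the rounded per-phase figures as printed; note the log's own total, 15.839s, is one millisecond above the sum of the rounded phases (15.838s), presumably because systemd rounds the full-precision total independently of each phase:

```python
# Phase durations exactly as printed in the journal entry above,
# converted to seconds.
phases_s = {"kernel": 0.920, "initrd": 7.637, "userspace": 7.281}

# Sum of the rounded per-phase figures.
total_s = round(sum(phases_s.values()), 3)
```

`total_s` comes out as 15.838, within a millisecond of the 15.839s the log reports as the total.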
Sep 13 01:04:44.656253 kubelet[1357]: E0913 01:04:44.656226 1357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:04:44.657513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:04:44.657601 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:04:54.908093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:04:54.908221 systemd[1]: Stopped kubelet.service.
Sep 13 01:04:54.909239 systemd[1]: Starting kubelet.service...
Sep 13 01:04:55.080693 systemd[1]: Started kubelet.service.
Sep 13 01:04:55.114987 kubelet[1389]: E0913 01:04:55.114958 1389 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:04:55.117169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:04:55.117245 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:05.367835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 01:05:05.367986 systemd[1]: Stopped kubelet.service.
Sep 13 01:05:05.369225 systemd[1]: Starting kubelet.service...
Sep 13 01:05:05.793049 systemd[1]: Started kubelet.service.
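The kubelet crash loop above repeats on a roughly ten-second cadence: systemd logs a "Scheduled restart job" entry with an incrementing counter each cycle. A sketch of pulling those restart timestamps out of journal text and measuring the spacing, using two lines copied from this log (the journal prefix omits the year, so one is assumed for parsing):

```python
import re
from datetime import datetime

# Matches systemd's restart-counter entries as they appear in this journal:
# "<Mon DD HH:MM:SS.micros> systemd[1]: kubelet.service: Scheduled restart job,
#  restart counter is at N."
_RESTART = re.compile(
    r'^(\w{3} \d{2} [\d:.]+) systemd\[1\]: kubelet\.service: '
    r'Scheduled restart job, restart counter is at (\d+)\.'
)

def restart_times(lines, year=2025):
    """Return (counter, timestamp) pairs for scheduled kubelet restarts."""
    out = []
    for line in lines:
        m = _RESTART.match(line)
        if m:
            ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            out.append((int(m.group(2)), ts))
    return out

# Two restart entries copied verbatim from the journal above.
journal = [
    "Sep 13 01:04:54.908093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.",
    "Sep 13 01:05:05.367835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.",
]
times = restart_times(journal)
gap_s = (times[1][1] - times[0][1]).total_seconds()
```

The gap comes out a little over ten seconds, consistent with a `RestartSec=` of about 10s on the unit (an inference from the spacing, not something the log states directly).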
Sep 13 01:05:05.863894 kubelet[1398]: E0913 01:05:05.863868 1398 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:05:05.865112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:05:05.865185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:11.690530 systemd[1]: Created slice system-sshd.slice.
Sep 13 01:05:11.692780 systemd[1]: Started sshd@0-139.178.70.102:22-147.75.109.163:41664.service.
Sep 13 01:05:11.767358 sshd[1405]: Accepted publickey for core from 147.75.109.163 port 41664 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:11.768285 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:11.771567 systemd[1]: Started session-3.scope.
Sep 13 01:05:11.771809 systemd-logind[1235]: New session 3 of user core.
Sep 13 01:05:11.820641 systemd[1]: Started sshd@1-139.178.70.102:22-147.75.109.163:41674.service.
Sep 13 01:05:11.864292 sshd[1410]: Accepted publickey for core from 147.75.109.163 port 41674 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:11.865323 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:11.868717 systemd[1]: Started session-4.scope.
Sep 13 01:05:11.868829 systemd-logind[1235]: New session 4 of user core.
Sep 13 01:05:11.920258 sshd[1410]: pam_unix(sshd:session): session closed for user core
Sep 13 01:05:11.923818 systemd[1]: Started sshd@2-139.178.70.102:22-147.75.109.163:41676.service.
Sep 13 01:05:11.924153 systemd[1]: sshd@1-139.178.70.102:22-147.75.109.163:41674.service: Deactivated successfully.
Sep 13 01:05:11.924572 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 01:05:11.925783 systemd-logind[1235]: Session 4 logged out. Waiting for processes to exit.
Sep 13 01:05:11.926602 systemd-logind[1235]: Removed session 4.
Sep 13 01:05:11.955661 sshd[1415]: Accepted publickey for core from 147.75.109.163 port 41676 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:11.956690 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:11.959569 systemd[1]: Started session-5.scope.
Sep 13 01:05:11.960452 systemd-logind[1235]: New session 5 of user core.
Sep 13 01:05:12.007173 sshd[1415]: pam_unix(sshd:session): session closed for user core
Sep 13 01:05:12.009356 systemd[1]: sshd@2-139.178.70.102:22-147.75.109.163:41676.service: Deactivated successfully.
Sep 13 01:05:12.009729 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 01:05:12.010086 systemd-logind[1235]: Session 5 logged out. Waiting for processes to exit.
Sep 13 01:05:12.010754 systemd[1]: Started sshd@3-139.178.70.102:22-147.75.109.163:41690.service.
Sep 13 01:05:12.011176 systemd-logind[1235]: Removed session 5.
Sep 13 01:05:12.044125 sshd[1422]: Accepted publickey for core from 147.75.109.163 port 41690 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:12.045129 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:12.048074 systemd[1]: Started session-6.scope.
Sep 13 01:05:12.048386 systemd-logind[1235]: New session 6 of user core.
Sep 13 01:05:12.098612 sshd[1422]: pam_unix(sshd:session): session closed for user core
Sep 13 01:05:12.100606 systemd[1]: sshd@3-139.178.70.102:22-147.75.109.163:41690.service: Deactivated successfully.
Sep 13 01:05:12.100924 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 01:05:12.101382 systemd-logind[1235]: Session 6 logged out. Waiting for processes to exit.
Sep 13 01:05:12.102027 systemd[1]: Started sshd@4-139.178.70.102:22-147.75.109.163:41696.service.
Sep 13 01:05:12.102664 systemd-logind[1235]: Removed session 6.
Sep 13 01:05:12.135503 sshd[1428]: Accepted publickey for core from 147.75.109.163 port 41696 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:12.136320 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:12.138817 systemd-logind[1235]: New session 7 of user core.
Sep 13 01:05:12.139310 systemd[1]: Started session-7.scope.
Sep 13 01:05:12.208790 sudo[1431]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 01:05:12.208926 sudo[1431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 01:05:12.224755 systemd[1]: Starting docker.service...
Sep 13 01:05:12.254247 env[1441]: time="2025-09-13T01:05:12.254213487Z" level=info msg="Starting up"
Sep 13 01:05:12.254905 env[1441]: time="2025-09-13T01:05:12.254893497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 01:05:12.254964 env[1441]: time="2025-09-13T01:05:12.254951441Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 01:05:12.255049 env[1441]: time="2025-09-13T01:05:12.255039129Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 01:05:12.255096 env[1441]: time="2025-09-13T01:05:12.255087150Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 01:05:12.256359 env[1441]: time="2025-09-13T01:05:12.256348179Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 01:05:12.256420 env[1441]: time="2025-09-13T01:05:12.256403112Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 01:05:12.256468 env[1441]: time="2025-09-13T01:05:12.256456462Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 01:05:12.256513 env[1441]: time="2025-09-13T01:05:12.256504314Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 01:05:12.275820 env[1441]: time="2025-09-13T01:05:12.275794921Z" level=info msg="Loading containers: start."
Sep 13 01:05:12.375443 kernel: Initializing XFRM netlink socket
Sep 13 01:05:12.401437 env[1441]: time="2025-09-13T01:05:12.401398546Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 01:05:12.441320 systemd-networkd[1060]: docker0: Link UP
Sep 13 01:05:12.451337 env[1441]: time="2025-09-13T01:05:12.451316538Z" level=info msg="Loading containers: done."
Sep 13 01:05:12.461891 env[1441]: time="2025-09-13T01:05:12.461833415Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 01:05:12.462210 env[1441]: time="2025-09-13T01:05:12.462193466Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 01:05:12.462268 env[1441]: time="2025-09-13T01:05:12.462256103Z" level=info msg="Daemon has completed initialization"
Sep 13 01:05:12.469642 systemd[1]: Started docker.service.
Sep 13 01:05:12.476126 env[1441]: time="2025-09-13T01:05:12.475963751Z" level=info msg="API listen on /run/docker.sock"
Sep 13 01:05:13.450865 env[1244]: time="2025-09-13T01:05:13.450829247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 01:05:13.971124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360673556.mount: Deactivated successfully.
Sep 13 01:05:15.252722 env[1244]: time="2025-09-13T01:05:15.252673568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:15.256311 env[1244]: time="2025-09-13T01:05:15.256286210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:15.258205 env[1244]: time="2025-09-13T01:05:15.258185091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:15.259638 env[1244]: time="2025-09-13T01:05:15.259622115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:15.260098 env[1244]: time="2025-09-13T01:05:15.260074625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 01:05:15.262075 env[1244]: time="2025-09-13T01:05:15.262056754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 01:05:16.115737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 01:05:16.115868 systemd[1]: Stopped kubelet.service.
Sep 13 01:05:16.116887 systemd[1]: Starting kubelet.service...
Sep 13 01:05:16.177362 systemd[1]: Started kubelet.service.
Sep 13 01:05:16.197607 kubelet[1568]: E0913 01:05:16.197573 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:05:16.198658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:05:16.198733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:17.040916 env[1244]: time="2025-09-13T01:05:17.040887834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:17.058510 env[1244]: time="2025-09-13T01:05:17.058493914Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:17.065333 env[1244]: time="2025-09-13T01:05:17.065317267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:17.094778 env[1244]: time="2025-09-13T01:05:17.094754222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:17.095329 env[1244]: time="2025-09-13T01:05:17.095314739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 01:05:17.095670 env[1244]: time="2025-09-13T01:05:17.095658863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 01:05:18.597504 env[1244]: time="2025-09-13T01:05:18.597445480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:18.598227 env[1244]: time="2025-09-13T01:05:18.598211358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:18.599752 env[1244]: time="2025-09-13T01:05:18.599736236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:18.601085 env[1244]: time="2025-09-13T01:05:18.601068695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:18.601471 env[1244]: time="2025-09-13T01:05:18.601452820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 01:05:18.601885 env[1244]: time="2025-09-13T01:05:18.601872977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 01:05:19.579801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51067227.mount: Deactivated successfully.
Sep 13 01:05:20.677697 env[1244]: time="2025-09-13T01:05:20.677652137Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:20.679737 env[1244]: time="2025-09-13T01:05:20.679708103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:20.681213 env[1244]: time="2025-09-13T01:05:20.681197658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:20.683591 env[1244]: time="2025-09-13T01:05:20.683577412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:20.683854 env[1244]: time="2025-09-13T01:05:20.683811591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 01:05:20.684229 env[1244]: time="2025-09-13T01:05:20.684216605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 01:05:21.198427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105908459.mount: Deactivated successfully.
Sep 13 01:05:22.932322 env[1244]: time="2025-09-13T01:05:22.932291357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:22.945044 env[1244]: time="2025-09-13T01:05:22.945020241Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:22.947905 env[1244]: time="2025-09-13T01:05:22.947888885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:22.957466 env[1244]: time="2025-09-13T01:05:22.957450388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:22.957691 env[1244]: time="2025-09-13T01:05:22.957674055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 01:05:22.957976 env[1244]: time="2025-09-13T01:05:22.957964695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 01:05:23.588590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553888216.mount: Deactivated successfully.
Sep 13 01:05:23.594662 env[1244]: time="2025-09-13T01:05:23.594629914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:23.597226 env[1244]: time="2025-09-13T01:05:23.597199471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:23.600102 env[1244]: time="2025-09-13T01:05:23.600081185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:23.601682 env[1244]: time="2025-09-13T01:05:23.601667218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:23.602026 env[1244]: time="2025-09-13T01:05:23.602011415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 01:05:23.602920 env[1244]: time="2025-09-13T01:05:23.602908843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 01:05:24.171650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328563494.mount: Deactivated successfully.
Sep 13 01:05:26.216205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 13 01:05:26.216363 systemd[1]: Stopped kubelet.service.
Sep 13 01:05:26.217656 systemd[1]: Starting kubelet.service...
Sep 13 01:05:26.491695 update_engine[1236]: I0913 01:05:26.491462 1236 update_attempter.cc:509] Updating boot flags...
Sep 13 01:05:28.071352 env[1244]: time="2025-09-13T01:05:28.071315102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:28.116222 env[1244]: time="2025-09-13T01:05:28.116187738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:28.123423 env[1244]: time="2025-09-13T01:05:28.123342845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:28.128721 env[1244]: time="2025-09-13T01:05:28.128690614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:28.129330 env[1244]: time="2025-09-13T01:05:28.129310247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 01:05:30.540280 systemd[1]: Started kubelet.service. Sep 13 01:05:30.585888 kubelet[1616]: E0913 01:05:30.585858 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:05:30.586764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:05:30.586847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:05:31.568730 systemd[1]: Stopped kubelet.service. 
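Editor's note: the kubelet exit above (status=1/FAILURE) is caused by the missing /var/lib/kubelet/config.yaml; kubeadm normally writes that file during "kubeadm init" or "kubeadm join", so the service keeps crash-looping until the node is bootstrapped. A minimal sketch of the underlying check (the function name is illustrative, not part of any tool):

```shell
# Report whether the kubelet config file the log complains about exists.
kubelet_config_present() {
  # $1: path to check (defaults to the path from the log entry above)
  cfg="${1:-/var/lib/kubelet/config.yaml}"
  if [ -f "$cfg" ]; then
    echo "present"
  else
    echo "missing"
  fi
}
```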
Sep 13 01:05:31.570345 systemd[1]: Starting kubelet.service... Sep 13 01:05:31.593408 systemd[1]: Reloading. Sep 13 01:05:31.659545 /usr/lib/systemd/system-generators/torcx-generator[1649]: time="2025-09-13T01:05:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:05:31.659562 /usr/lib/systemd/system-generators/torcx-generator[1649]: time="2025-09-13T01:05:31Z" level=info msg="torcx already run" Sep 13 01:05:31.692550 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:05:31.692561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:05:31.705332 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:05:31.884699 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 01:05:31.884765 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 01:05:31.885050 systemd[1]: Stopped kubelet.service. Sep 13 01:05:31.887051 systemd[1]: Starting kubelet.service... Sep 13 01:05:33.333496 systemd[1]: Started kubelet.service. Sep 13 01:05:33.372115 kubelet[1712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:05:33.372366 kubelet[1712]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 13 01:05:33.372422 kubelet[1712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:05:33.372520 kubelet[1712]: I0913 01:05:33.372503 1712 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:05:34.053639 kubelet[1712]: I0913 01:05:34.053615 1712 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 01:05:34.053639 kubelet[1712]: I0913 01:05:34.053634 1712 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:05:34.053811 kubelet[1712]: I0913 01:05:34.053788 1712 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 01:05:34.184730 kubelet[1712]: I0913 01:05:34.184700 1712 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:05:34.185917 kubelet[1712]: E0913 01:05:34.185902 1712 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 01:05:34.192802 kubelet[1712]: E0913 01:05:34.192763 1712 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:05:34.192921 kubelet[1712]: I0913 01:05:34.192910 1712 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 13 01:05:34.196177 kubelet[1712]: I0913 01:05:34.196166 1712 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:05:34.196419 kubelet[1712]: I0913 01:05:34.196385 1712 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:05:34.196570 kubelet[1712]: I0913 01:05:34.196463 1712 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:05:34.196660 kubelet[1712]: I0913 01:05:34.196652 1712 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:05:34.196705 kubelet[1712]: I0913 01:05:34.196698 1712 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 01:05:34.196822 kubelet[1712]: I0913 01:05:34.196815 1712 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:05:34.200579 kubelet[1712]: I0913 01:05:34.200562 1712 kubelet.go:480] "Attempting to sync node with API server" Sep 13 01:05:34.200653 kubelet[1712]: I0913 01:05:34.200644 1712 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:05:34.200747 kubelet[1712]: I0913 01:05:34.200739 1712 kubelet.go:386] "Adding apiserver pod source" Sep 13 01:05:34.200797 kubelet[1712]: I0913 01:05:34.200790 1712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:05:34.280951 kubelet[1712]: E0913 01:05:34.280914 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 01:05:34.331245 kubelet[1712]: E0913 01:05:34.330612 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 01:05:34.331768 kubelet[1712]: I0913 01:05:34.331754 1712 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:05:34.332295 kubelet[1712]: I0913 01:05:34.332282 
1712 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 01:05:34.338364 kubelet[1712]: W0913 01:05:34.338345 1712 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 01:05:34.345552 kubelet[1712]: I0913 01:05:34.345537 1712 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:05:34.345685 kubelet[1712]: I0913 01:05:34.345675 1712 server.go:1289] "Started kubelet" Sep 13 01:05:34.349304 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 13 01:05:34.351764 kubelet[1712]: I0913 01:05:34.351751 1712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:05:34.356771 kubelet[1712]: I0913 01:05:34.356748 1712 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 01:05:34.359480 kubelet[1712]: I0913 01:05:34.359459 1712 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:05:34.360105 kubelet[1712]: I0913 01:05:34.360087 1712 server.go:317] "Adding debug handlers to kubelet server" Sep 13 01:05:34.360980 kubelet[1712]: E0913 01:05:34.358982 1712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b20f874e56c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 01:05:34.345647814 +0000 UTC m=+1.007488575,LastTimestamp:2025-09-13 01:05:34.345647814 +0000 UTC m=+1.007488575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 01:05:34.361403 kubelet[1712]: I0913 01:05:34.361374 1712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:05:34.361691 kubelet[1712]: I0913 01:05:34.361682 1712 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:05:34.361886 kubelet[1712]: I0913 01:05:34.361876 1712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:05:34.362449 kubelet[1712]: I0913 01:05:34.362436 1712 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:05:34.362596 kubelet[1712]: E0913 01:05:34.362582 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:34.364720 kubelet[1712]: E0913 01:05:34.364692 1712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms" Sep 13 01:05:34.364830 kubelet[1712]: I0913 01:05:34.364818 1712 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:05:34.364882 kubelet[1712]: I0913 01:05:34.364868 1712 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:05:34.365123 kubelet[1712]: E0913 01:05:34.365088 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 01:05:34.365556 kubelet[1712]: E0913 
01:05:34.365534 1712 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:05:34.366274 kubelet[1712]: I0913 01:05:34.366258 1712 factory.go:223] Registration of the containerd container factory successfully Sep 13 01:05:34.366333 kubelet[1712]: I0913 01:05:34.366326 1712 factory.go:223] Registration of the systemd container factory successfully Sep 13 01:05:34.366486 kubelet[1712]: I0913 01:05:34.366474 1712 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:05:34.376557 kubelet[1712]: I0913 01:05:34.376537 1712 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:05:34.376557 kubelet[1712]: I0913 01:05:34.376550 1712 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:05:34.376557 kubelet[1712]: I0913 01:05:34.376562 1712 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:05:34.378288 kubelet[1712]: I0913 01:05:34.378275 1712 policy_none.go:49] "None policy: Start" Sep 13 01:05:34.378338 kubelet[1712]: I0913 01:05:34.378292 1712 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:05:34.378338 kubelet[1712]: I0913 01:05:34.378299 1712 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:05:34.382538 systemd[1]: Created slice kubepods.slice. Sep 13 01:05:34.389173 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 01:05:34.389720 kubelet[1712]: I0913 01:05:34.389483 1712 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 13 01:05:34.389720 kubelet[1712]: I0913 01:05:34.389501 1712 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 01:05:34.390716 kubelet[1712]: I0913 01:05:34.390645 1712 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 01:05:34.390716 kubelet[1712]: I0913 01:05:34.390657 1712 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 01:05:34.390716 kubelet[1712]: E0913 01:05:34.390686 1712 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:05:34.391386 kubelet[1712]: E0913 01:05:34.391365 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 01:05:34.392881 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 01:05:34.399070 kubelet[1712]: E0913 01:05:34.399050 1712 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 01:05:34.399166 kubelet[1712]: I0913 01:05:34.399151 1712 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:05:34.399199 kubelet[1712]: I0913 01:05:34.399162 1712 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:05:34.399792 kubelet[1712]: I0913 01:05:34.399542 1712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:05:34.400504 kubelet[1712]: E0913 01:05:34.400486 1712 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 01:05:34.400540 kubelet[1712]: E0913 01:05:34.400528 1712 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 01:05:34.498308 systemd[1]: Created slice kubepods-burstable-pod0a9edb313ae1c084b25c59898de9a580.slice. Sep 13 01:05:34.500152 kubelet[1712]: I0913 01:05:34.500126 1712 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:34.500437 kubelet[1712]: E0913 01:05:34.500389 1712 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Sep 13 01:05:34.511890 kubelet[1712]: E0913 01:05:34.511747 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:34.514053 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 01:05:34.515378 kubelet[1712]: E0913 01:05:34.515333 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:34.522126 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
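Editor's note: the deprecation warnings earlier in this run (--container-runtime-endpoint, --volume-plugin-dir, --pod-infra-container-image) point at the KubeletConfiguration file, and the HardEvictionThresholds in the nodeConfig dump (memory.available 100Mi, nodefs.available 10%, imagefs.available 15%, inodesFree 5%) match kubelet defaults. A minimal sketch of such a config file, with values mirroring this log — the socket path is an assumption, not taken from the log:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# assumed containerd socket path; adjust to the host's actual endpoint
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# matches the flexvolume directory the kubelet recreated above
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# matches "CgroupDriver":"systemd" in the nodeConfig dump
cgroupDriver: systemd
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
```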
Sep 13 01:05:34.523529 kubelet[1712]: E0913 01:05:34.523514 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:34.566156 kubelet[1712]: E0913 01:05:34.566113 1712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms" Sep 13 01:05:34.667466 kubelet[1712]: I0913 01:05:34.666548 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a9edb313ae1c084b25c59898de9a580-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a9edb313ae1c084b25c59898de9a580\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:34.667637 kubelet[1712]: I0913 01:05:34.667622 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:34.667740 kubelet[1712]: I0913 01:05:34.667726 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:34.668025 kubelet[1712]: I0913 01:05:34.667998 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:34.668155 kubelet[1712]: I0913 01:05:34.668141 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:34.668252 kubelet[1712]: I0913 01:05:34.668241 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a9edb313ae1c084b25c59898de9a580-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a9edb313ae1c084b25c59898de9a580\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:34.668338 kubelet[1712]: I0913 01:05:34.668327 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a9edb313ae1c084b25c59898de9a580-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a9edb313ae1c084b25c59898de9a580\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:34.668434 kubelet[1712]: I0913 01:05:34.668407 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:34.668512 kubelet[1712]: I0913 01:05:34.668501 1712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 01:05:34.701896 kubelet[1712]: I0913 01:05:34.701872 1712 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:34.702175 kubelet[1712]: E0913 01:05:34.702147 1712 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Sep 13 01:05:34.812986 env[1244]: time="2025-09-13T01:05:34.812709441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a9edb313ae1c084b25c59898de9a580,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:34.816752 env[1244]: time="2025-09-13T01:05:34.816727637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:34.824722 env[1244]: time="2025-09-13T01:05:34.824432814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:34.967230 kubelet[1712]: E0913 01:05:34.967207 1712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms" Sep 13 01:05:35.103899 kubelet[1712]: I0913 01:05:35.103875 1712 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:35.104096 kubelet[1712]: E0913 01:05:35.104073 1712 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: 
connection refused" node="localhost" Sep 13 01:05:35.200090 kubelet[1712]: E0913 01:05:35.200060 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 01:05:35.222706 kubelet[1712]: E0913 01:05:35.222615 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 01:05:35.294883 kubelet[1712]: E0913 01:05:35.294852 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 01:05:35.387542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073191592.mount: Deactivated successfully. 
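Editor's note: the lease-controller retries above double their interval each attempt (interval="200ms", then "400ms", then "800ms") while the API server at 139.178.70.102:6443 is still refusing connections. A sketch of that doubling schedule — the function name and the 7000ms cap are illustrative assumptions, not kubelet source:

```shell
# Print the retry interval in milliseconds for attempt $1,
# starting at 200ms and doubling, capped at an assumed 7000ms.
backoff_ms() {
  attempt=$1
  interval=200
  i=1
  while [ "$i" -lt "$attempt" ]; do
    interval=$((interval * 2))
    [ "$interval" -gt 7000 ] && interval=7000
    i=$((i + 1))
  done
  echo "$interval"
}
```

The first three attempts reproduce the intervals recorded in this log.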
Sep 13 01:05:35.390800 env[1244]: time="2025-09-13T01:05:35.390756794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.391578 env[1244]: time="2025-09-13T01:05:35.391492025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.392808 env[1244]: time="2025-09-13T01:05:35.392794151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.393349 env[1244]: time="2025-09-13T01:05:35.393337087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.394134 env[1244]: time="2025-09-13T01:05:35.394122411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.394606 env[1244]: time="2025-09-13T01:05:35.394591456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.395087 env[1244]: time="2025-09-13T01:05:35.395071478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.397095 env[1244]: time="2025-09-13T01:05:35.397077821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.398562 env[1244]: time="2025-09-13T01:05:35.398547109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.398926 env[1244]: time="2025-09-13T01:05:35.398910443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.400220 env[1244]: time="2025-09-13T01:05:35.400206979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.406654 env[1244]: time="2025-09-13T01:05:35.406614966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:35.413850 env[1244]: time="2025-09-13T01:05:35.413798107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:35.416786 env[1244]: time="2025-09-13T01:05:35.413838320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:35.416786 env[1244]: time="2025-09-13T01:05:35.413846016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:35.416786 env[1244]: time="2025-09-13T01:05:35.413981539Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d83247d7ea3cb9a017ec959c12f5e8b952968dee2a0f7e72766eb0713ded57d pid=1754 runtime=io.containerd.runc.v2 Sep 13 01:05:35.427488 env[1244]: time="2025-09-13T01:05:35.427443005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:35.427488 env[1244]: time="2025-09-13T01:05:35.427468495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:35.427488 env[1244]: time="2025-09-13T01:05:35.427475641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:35.427707 env[1244]: time="2025-09-13T01:05:35.427680892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6d49531ce94f3e81cf0a23acc6b69ab8d6dd84ddbaf0445db572e9c21d83eb4 pid=1776 runtime=io.containerd.runc.v2 Sep 13 01:05:35.432818 systemd[1]: Started cri-containerd-4d83247d7ea3cb9a017ec959c12f5e8b952968dee2a0f7e72766eb0713ded57d.scope. Sep 13 01:05:35.458489 env[1244]: time="2025-09-13T01:05:35.458454221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:35.458613 env[1244]: time="2025-09-13T01:05:35.458598943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:35.458680 env[1244]: time="2025-09-13T01:05:35.458667560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:35.458807 env[1244]: time="2025-09-13T01:05:35.458791712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b051667c9b5647a3c887d18e893ac1b9fe921cf5d117b0eba57cb7e4a769df1a pid=1813 runtime=io.containerd.runc.v2 Sep 13 01:05:35.468901 systemd[1]: Started cri-containerd-c6d49531ce94f3e81cf0a23acc6b69ab8d6dd84ddbaf0445db572e9c21d83eb4.scope. Sep 13 01:05:35.472362 env[1244]: time="2025-09-13T01:05:35.472333036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d83247d7ea3cb9a017ec959c12f5e8b952968dee2a0f7e72766eb0713ded57d\"" Sep 13 01:05:35.481861 systemd[1]: Started cri-containerd-b051667c9b5647a3c887d18e893ac1b9fe921cf5d117b0eba57cb7e4a769df1a.scope. Sep 13 01:05:35.486831 env[1244]: time="2025-09-13T01:05:35.486809123Z" level=info msg="CreateContainer within sandbox \"4d83247d7ea3cb9a017ec959c12f5e8b952968dee2a0f7e72766eb0713ded57d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 01:05:35.504531 env[1244]: time="2025-09-13T01:05:35.504507340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a9edb313ae1c084b25c59898de9a580,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6d49531ce94f3e81cf0a23acc6b69ab8d6dd84ddbaf0445db572e9c21d83eb4\"" Sep 13 01:05:35.510943 env[1244]: time="2025-09-13T01:05:35.510924401Z" level=info msg="CreateContainer within sandbox \"c6d49531ce94f3e81cf0a23acc6b69ab8d6dd84ddbaf0445db572e9c21d83eb4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 01:05:35.521495 env[1244]: time="2025-09-13T01:05:35.521468213Z" level=info msg="CreateContainer within sandbox \"4d83247d7ea3cb9a017ec959c12f5e8b952968dee2a0f7e72766eb0713ded57d\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a92d74a8fcaf95a264d4f76594faf2406533c2e496f1e6c3a83e85c02f3a6b43\"" Sep 13 01:05:35.522175 env[1244]: time="2025-09-13T01:05:35.522160343Z" level=info msg="StartContainer for \"a92d74a8fcaf95a264d4f76594faf2406533c2e496f1e6c3a83e85c02f3a6b43\"" Sep 13 01:05:35.525420 env[1244]: time="2025-09-13T01:05:35.525384120Z" level=info msg="CreateContainer within sandbox \"c6d49531ce94f3e81cf0a23acc6b69ab8d6dd84ddbaf0445db572e9c21d83eb4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a5ddd9d5f710b3c74b738a6dbd1bcd15f8470006ac3d0f3ad4a29ad510aff76\"" Sep 13 01:05:35.526446 env[1244]: time="2025-09-13T01:05:35.526408353Z" level=info msg="StartContainer for \"7a5ddd9d5f710b3c74b738a6dbd1bcd15f8470006ac3d0f3ad4a29ad510aff76\"" Sep 13 01:05:35.527616 env[1244]: time="2025-09-13T01:05:35.527599213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b051667c9b5647a3c887d18e893ac1b9fe921cf5d117b0eba57cb7e4a769df1a\"" Sep 13 01:05:35.529848 env[1244]: time="2025-09-13T01:05:35.529826489Z" level=info msg="CreateContainer within sandbox \"b051667c9b5647a3c887d18e893ac1b9fe921cf5d117b0eba57cb7e4a769df1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 01:05:35.541240 systemd[1]: Started cri-containerd-a92d74a8fcaf95a264d4f76594faf2406533c2e496f1e6c3a83e85c02f3a6b43.scope. 
Sep 13 01:05:35.551035 env[1244]: time="2025-09-13T01:05:35.549918713Z" level=info msg="CreateContainer within sandbox \"b051667c9b5647a3c887d18e893ac1b9fe921cf5d117b0eba57cb7e4a769df1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c90487897ac84ff22a33812777f128ca1e6670c3c93d783d1ce56749c4f89ed\"" Sep 13 01:05:35.555107 systemd[1]: Started cri-containerd-7a5ddd9d5f710b3c74b738a6dbd1bcd15f8470006ac3d0f3ad4a29ad510aff76.scope. Sep 13 01:05:35.555960 env[1244]: time="2025-09-13T01:05:35.555895009Z" level=info msg="StartContainer for \"1c90487897ac84ff22a33812777f128ca1e6670c3c93d783d1ce56749c4f89ed\"" Sep 13 01:05:35.571051 systemd[1]: Started cri-containerd-1c90487897ac84ff22a33812777f128ca1e6670c3c93d783d1ce56749c4f89ed.scope. Sep 13 01:05:35.590995 env[1244]: time="2025-09-13T01:05:35.590971720Z" level=info msg="StartContainer for \"a92d74a8fcaf95a264d4f76594faf2406533c2e496f1e6c3a83e85c02f3a6b43\" returns successfully" Sep 13 01:05:35.610030 env[1244]: time="2025-09-13T01:05:35.610006697Z" level=info msg="StartContainer for \"7a5ddd9d5f710b3c74b738a6dbd1bcd15f8470006ac3d0f3ad4a29ad510aff76\" returns successfully" Sep 13 01:05:35.620693 env[1244]: time="2025-09-13T01:05:35.620665794Z" level=info msg="StartContainer for \"1c90487897ac84ff22a33812777f128ca1e6670c3c93d783d1ce56749c4f89ed\" returns successfully" Sep 13 01:05:35.639445 kubelet[1712]: E0913 01:05:35.639398 1712 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 01:05:35.767683 kubelet[1712]: E0913 01:05:35.767610 1712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s" Sep 13 01:05:35.905467 kubelet[1712]: I0913 01:05:35.905259 1712 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:35.905467 kubelet[1712]: E0913 01:05:35.905441 1712 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Sep 13 01:05:36.317613 kubelet[1712]: E0913 01:05:36.317589 1712 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 01:05:36.395550 kubelet[1712]: E0913 01:05:36.395535 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:36.396921 kubelet[1712]: E0913 01:05:36.396910 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:36.397931 kubelet[1712]: E0913 01:05:36.397922 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:37.399234 kubelet[1712]: E0913 01:05:37.399215 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:37.399497 kubelet[1712]: E0913 01:05:37.399396 1712 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:37.507118 kubelet[1712]: I0913 01:05:37.507098 1712 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:37.704641 kubelet[1712]: E0913 01:05:37.704574 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:37.867097 kubelet[1712]: E0913 01:05:37.867072 1712 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 01:05:37.960169 kubelet[1712]: I0913 01:05:37.960101 1712 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 01:05:37.960169 kubelet[1712]: E0913 01:05:37.960127 1712 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 01:05:37.967303 kubelet[1712]: E0913 01:05:37.967281 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.067973 kubelet[1712]: E0913 01:05:38.067942 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.169076 kubelet[1712]: E0913 01:05:38.169016 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.269761 kubelet[1712]: E0913 01:05:38.269734 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.369866 kubelet[1712]: E0913 01:05:38.369831 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.400105 kubelet[1712]: E0913 01:05:38.400052 1712 kubelet.go:3305] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:38.400315 kubelet[1712]: E0913 01:05:38.400267 1712 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:38.470045 kubelet[1712]: E0913 01:05:38.470012 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.570888 kubelet[1712]: E0913 01:05:38.570811 1712 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:38.663059 kubelet[1712]: I0913 01:05:38.663024 1712 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:38.667621 kubelet[1712]: E0913 01:05:38.667601 1712 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:38.667743 kubelet[1712]: I0913 01:05:38.667735 1712 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:38.668802 kubelet[1712]: E0913 01:05:38.668790 1712 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:38.668862 kubelet[1712]: I0913 01:05:38.668854 1712 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 01:05:38.669698 kubelet[1712]: E0913 01:05:38.669688 1712 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 01:05:39.283652 
kubelet[1712]: I0913 01:05:39.283630 1712 apiserver.go:52] "Watching apiserver" Sep 13 01:05:39.365807 kubelet[1712]: I0913 01:05:39.365770 1712 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 01:05:39.870737 systemd[1]: Reloading. Sep 13 01:05:39.927476 /usr/lib/systemd/system-generators/torcx-generator[2010]: time="2025-09-13T01:05:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:05:39.927502 /usr/lib/systemd/system-generators/torcx-generator[2010]: time="2025-09-13T01:05:39Z" level=info msg="torcx already run" Sep 13 01:05:40.016976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:05:40.016994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:05:40.031118 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:05:40.117580 systemd[1]: Stopping kubelet.service... Sep 13 01:05:40.132823 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:05:40.132954 systemd[1]: Stopped kubelet.service. Sep 13 01:05:40.132989 systemd[1]: kubelet.service: Consumed 1.036s CPU time. Sep 13 01:05:40.134683 systemd[1]: Starting kubelet.service... Sep 13 01:05:41.893290 systemd[1]: Started kubelet.service. Sep 13 01:05:42.023605 kubelet[2075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:05:42.023605 kubelet[2075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 01:05:42.023605 kubelet[2075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:05:42.023918 kubelet[2075]: I0913 01:05:42.023640 2075 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:05:42.039560 kubelet[2075]: I0913 01:05:42.039537 2075 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 01:05:42.039723 kubelet[2075]: I0913 01:05:42.039713 2075 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:05:42.040027 kubelet[2075]: I0913 01:05:42.040016 2075 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 01:05:42.051577 kubelet[2075]: I0913 01:05:42.051552 2075 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 01:05:42.091928 kubelet[2075]: I0913 01:05:42.091905 2075 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:05:42.100004 kubelet[2075]: E0913 01:05:42.099964 2075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:05:42.100004 kubelet[2075]: I0913 01:05:42.099994 2075 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 13 01:05:42.114331 kubelet[2075]: I0913 01:05:42.114307 2075 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:05:42.114451 kubelet[2075]: I0913 01:05:42.114432 2075 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:05:42.128331 kubelet[2075]: I0913 01:05:42.114449 2075 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:05:42.128331 kubelet[2075]: I0913 01:05:42.128334 2075 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:05:42.128540 kubelet[2075]: I0913 01:05:42.128345 2075 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 01:05:42.135451 kubelet[2075]: I0913 01:05:42.135434 2075 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:05:42.135616 kubelet[2075]: I0913 01:05:42.135605 2075 kubelet.go:480] "Attempting to sync node with API server" Sep 13 01:05:42.135652 kubelet[2075]: I0913 01:05:42.135624 2075 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:05:42.135652 kubelet[2075]: I0913 01:05:42.135639 2075 kubelet.go:386] "Adding apiserver pod source" Sep 13 01:05:42.135807 kubelet[2075]: I0913 01:05:42.135784 2075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:05:42.195031 kubelet[2075]: I0913 01:05:42.194338 2075 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:05:42.195423 kubelet[2075]: I0913 01:05:42.195400 2075 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 01:05:42.197383 kubelet[2075]: I0913 01:05:42.197374 2075 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:05:42.197475 kubelet[2075]: I0913 01:05:42.197464 2075 server.go:1289] "Started kubelet" Sep 13 01:05:42.199844 kubelet[2075]: I0913 01:05:42.197958 2075 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:05:42.199844 kubelet[2075]: I0913 01:05:42.199336 2075 server.go:317] "Adding debug handlers to kubelet server" Sep 13 01:05:42.200033 kubelet[2075]: I0913 01:05:42.200005 2075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 
01:05:42.200213 kubelet[2075]: I0913 01:05:42.200204 2075 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:05:42.202156 kubelet[2075]: I0913 01:05:42.202143 2075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:05:42.202372 kubelet[2075]: I0913 01:05:42.202363 2075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:05:42.217546 kubelet[2075]: I0913 01:05:42.217527 2075 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:05:42.218696 kubelet[2075]: I0913 01:05:42.217835 2075 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:05:42.218696 kubelet[2075]: I0913 01:05:42.217917 2075 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:05:42.219191 kubelet[2075]: I0913 01:05:42.219177 2075 factory.go:223] Registration of the systemd container factory successfully Sep 13 01:05:42.219254 kubelet[2075]: I0913 01:05:42.219239 2075 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:05:42.221215 kubelet[2075]: E0913 01:05:42.220340 2075 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:05:42.221215 kubelet[2075]: I0913 01:05:42.220734 2075 factory.go:223] Registration of the containerd container factory successfully Sep 13 01:05:42.244454 kubelet[2075]: I0913 01:05:42.244405 2075 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:05:42.244569 kubelet[2075]: I0913 01:05:42.244561 2075 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:05:42.244637 kubelet[2075]: I0913 01:05:42.244632 2075 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:05:42.244751 kubelet[2075]: I0913 01:05:42.244743 2075 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:05:42.244845 kubelet[2075]: I0913 01:05:42.244830 2075 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:05:42.246367 kubelet[2075]: I0913 01:05:42.246314 2075 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 01:05:42.247027 kubelet[2075]: I0913 01:05:42.247012 2075 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 01:05:42.247027 kubelet[2075]: I0913 01:05:42.247023 2075 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 01:05:42.247085 kubelet[2075]: I0913 01:05:42.247040 2075 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 01:05:42.247085 kubelet[2075]: I0913 01:05:42.247048 2075 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 01:05:42.247085 kubelet[2075]: E0913 01:05:42.247070 2075 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:05:42.252789 kubelet[2075]: I0913 01:05:42.252777 2075 policy_none.go:49] "None policy: Start" Sep 13 01:05:42.252880 kubelet[2075]: I0913 01:05:42.252872 2075 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:05:42.252931 kubelet[2075]: I0913 01:05:42.252924 2075 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:05:42.253047 kubelet[2075]: I0913 01:05:42.253039 2075 state_mem.go:75] "Updated machine memory state" Sep 13 01:05:42.255211 kubelet[2075]: E0913 01:05:42.255195 2075 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 01:05:42.255298 kubelet[2075]: I0913 01:05:42.255287 2075 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:05:42.255369 kubelet[2075]: I0913 01:05:42.255341 2075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:05:42.255574 kubelet[2075]: I0913 01:05:42.255567 2075 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:05:42.256902 kubelet[2075]: E0913 01:05:42.256892 2075 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 01:05:42.273928 sudo[2090]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 01:05:42.274082 sudo[2090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 01:05:42.348045 kubelet[2075]: I0913 01:05:42.348023 2075 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:42.348150 kubelet[2075]: I0913 01:05:42.348140 2075 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 01:05:42.348269 kubelet[2075]: I0913 01:05:42.348258 2075 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:42.358458 kubelet[2075]: I0913 01:05:42.358438 2075 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:42.363833 kubelet[2075]: I0913 01:05:42.363794 2075 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 01:05:42.363833 kubelet[2075]: I0913 01:05:42.363840 2075 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 01:05:42.518902 kubelet[2075]: I0913 01:05:42.518875 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:42.518902 kubelet[2075]: I0913 01:05:42.518901 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:42.519061 kubelet[2075]: I0913 01:05:42.518913 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:42.519061 kubelet[2075]: I0913 01:05:42.518922 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:42.519061 kubelet[2075]: I0913 01:05:42.518932 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a9edb313ae1c084b25c59898de9a580-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a9edb313ae1c084b25c59898de9a580\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:42.519061 kubelet[2075]: I0913 01:05:42.518940 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a9edb313ae1c084b25c59898de9a580-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a9edb313ae1c084b25c59898de9a580\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:42.519061 kubelet[2075]: I0913 01:05:42.518949 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a9edb313ae1c084b25c59898de9a580-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a9edb313ae1c084b25c59898de9a580\") " 
pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:42.519170 kubelet[2075]: I0913 01:05:42.518958 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:42.519170 kubelet[2075]: I0913 01:05:42.518967 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 01:05:42.945640 sudo[2090]: pam_unix(sudo:session): session closed for user root Sep 13 01:05:43.175479 kubelet[2075]: I0913 01:05:43.175450 2075 apiserver.go:52] "Watching apiserver" Sep 13 01:05:43.218406 kubelet[2075]: I0913 01:05:43.218385 2075 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 01:05:43.300139 kubelet[2075]: I0913 01:05:43.300100 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.300088054 podStartE2EDuration="1.300088054s" podCreationTimestamp="2025-09-13 01:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:05:43.286014005 +0000 UTC m=+1.331489144" watchObservedRunningTime="2025-09-13 01:05:43.300088054 +0000 UTC m=+1.345563185" Sep 13 01:05:43.310991 kubelet[2075]: I0913 01:05:43.310951 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.310932461 
podStartE2EDuration="1.310932461s" podCreationTimestamp="2025-09-13 01:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:05:43.300599357 +0000 UTC m=+1.346074498" watchObservedRunningTime="2025-09-13 01:05:43.310932461 +0000 UTC m=+1.356407596" Sep 13 01:05:43.330533 kubelet[2075]: I0913 01:05:43.330500 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.330488785 podStartE2EDuration="1.330488785s" podCreationTimestamp="2025-09-13 01:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:05:43.31178219 +0000 UTC m=+1.357257329" watchObservedRunningTime="2025-09-13 01:05:43.330488785 +0000 UTC m=+1.375963921" Sep 13 01:05:44.918174 kubelet[2075]: I0913 01:05:44.918155 2075 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:05:44.919370 env[1244]: time="2025-09-13T01:05:44.918635350Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 01:05:44.919529 kubelet[2075]: I0913 01:05:44.918736 2075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:05:45.110892 sudo[1431]: pam_unix(sudo:session): session closed for user root Sep 13 01:05:45.112486 sshd[1428]: pam_unix(sshd:session): session closed for user core Sep 13 01:05:45.113971 systemd-logind[1235]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:05:45.114789 systemd[1]: sshd@4-139.178.70.102:22-147.75.109.163:41696.service: Deactivated successfully. Sep 13 01:05:45.115215 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:05:45.115303 systemd[1]: session-7.scope: Consumed 3.689s CPU time. 
Sep 13 01:05:45.116126 systemd-logind[1235]: Removed session 7. Sep 13 01:05:45.404710 systemd[1]: Created slice kubepods-besteffort-podf138070b_1d84_48f7_ac58_5f17d6baba70.slice. Sep 13 01:05:45.417402 systemd[1]: Created slice kubepods-burstable-pod74bb19c9_295d_4ca9_96c4_8268351c5a4d.slice. Sep 13 01:05:45.439382 kubelet[2075]: I0913 01:05:45.439352 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f138070b-1d84-48f7-ac58-5f17d6baba70-kube-proxy\") pod \"kube-proxy-t6f74\" (UID: \"f138070b-1d84-48f7-ac58-5f17d6baba70\") " pod="kube-system/kube-proxy-t6f74" Sep 13 01:05:45.439549 kubelet[2075]: I0913 01:05:45.439537 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f138070b-1d84-48f7-ac58-5f17d6baba70-lib-modules\") pod \"kube-proxy-t6f74\" (UID: \"f138070b-1d84-48f7-ac58-5f17d6baba70\") " pod="kube-system/kube-proxy-t6f74" Sep 13 01:05:45.439622 kubelet[2075]: I0913 01:05:45.439613 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-bpf-maps\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.439705 kubelet[2075]: I0913 01:05:45.439694 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cni-path\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.439777 kubelet[2075]: I0913 01:05:45.439768 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-lib-modules\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.439847 kubelet[2075]: I0913 01:05:45.439839 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74bb19c9-295d-4ca9-96c4-8268351c5a4d-clustermesh-secrets\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.439908 kubelet[2075]: I0913 01:05:45.439893 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hubble-tls\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.439969 kubelet[2075]: I0913 01:05:45.439960 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hostproc\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440033 kubelet[2075]: I0913 01:05:45.440025 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-cgroup\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440088 kubelet[2075]: I0913 01:05:45.440079 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-xtables-lock\") pod \"cilium-tpwkg\" (UID: 
\"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440154 kubelet[2075]: I0913 01:05:45.440145 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-net\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440223 kubelet[2075]: I0913 01:05:45.440213 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjtfr\" (UniqueName: \"kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-kube-api-access-zjtfr\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440289 kubelet[2075]: I0913 01:05:45.440273 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f138070b-1d84-48f7-ac58-5f17d6baba70-xtables-lock\") pod \"kube-proxy-t6f74\" (UID: \"f138070b-1d84-48f7-ac58-5f17d6baba70\") " pod="kube-system/kube-proxy-t6f74" Sep 13 01:05:45.440351 kubelet[2075]: I0913 01:05:45.440342 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg44f\" (UniqueName: \"kubernetes.io/projected/f138070b-1d84-48f7-ac58-5f17d6baba70-kube-api-access-hg44f\") pod \"kube-proxy-t6f74\" (UID: \"f138070b-1d84-48f7-ac58-5f17d6baba70\") " pod="kube-system/kube-proxy-t6f74" Sep 13 01:05:45.440427 kubelet[2075]: I0913 01:05:45.440396 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-run\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 
01:05:45.440498 kubelet[2075]: I0913 01:05:45.440489 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-etc-cni-netd\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440562 kubelet[2075]: I0913 01:05:45.440547 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-config-path\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.440625 kubelet[2075]: I0913 01:05:45.440613 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-kernel\") pod \"cilium-tpwkg\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " pod="kube-system/cilium-tpwkg" Sep 13 01:05:45.543543 kubelet[2075]: I0913 01:05:45.543519 2075 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 01:05:45.554540 kubelet[2075]: E0913 01:05:45.554517 2075 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 01:05:45.554675 kubelet[2075]: E0913 01:05:45.554663 2075 projected.go:194] Error preparing data for projected volume kube-api-access-zjtfr for pod kube-system/cilium-tpwkg: configmap "kube-root-ca.crt" not found Sep 13 01:05:45.554783 kubelet[2075]: E0913 01:05:45.554774 2075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-kube-api-access-zjtfr podName:74bb19c9-295d-4ca9-96c4-8268351c5a4d nodeName:}" failed. No retries permitted until 2025-09-13 01:05:46.054758364 +0000 UTC m=+4.100233499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zjtfr" (UniqueName: "kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-kube-api-access-zjtfr") pod "cilium-tpwkg" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d") : configmap "kube-root-ca.crt" not found Sep 13 01:05:45.556016 kubelet[2075]: E0913 01:05:45.556001 2075 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 01:05:45.556154 kubelet[2075]: E0913 01:05:45.556098 2075 projected.go:194] Error preparing data for projected volume kube-api-access-hg44f for pod kube-system/kube-proxy-t6f74: configmap "kube-root-ca.crt" not found Sep 13 01:05:45.556243 kubelet[2075]: E0913 01:05:45.556235 2075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f138070b-1d84-48f7-ac58-5f17d6baba70-kube-api-access-hg44f podName:f138070b-1d84-48f7-ac58-5f17d6baba70 nodeName:}" failed. No retries permitted until 2025-09-13 01:05:46.056223687 +0000 UTC m=+4.101698819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hg44f" (UniqueName: "kubernetes.io/projected/f138070b-1d84-48f7-ac58-5f17d6baba70-kube-api-access-hg44f") pod "kube-proxy-t6f74" (UID: "f138070b-1d84-48f7-ac58-5f17d6baba70") : configmap "kube-root-ca.crt" not found Sep 13 01:05:45.852728 systemd[1]: Created slice kubepods-besteffort-pod69226fd4_06ac_4fe7_849b_ba866ebbbfe4.slice. Sep 13 01:05:45.945570 kubelet[2075]: I0913 01:05:45.945538 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjklt\" (UniqueName: \"kubernetes.io/projected/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-kube-api-access-hjklt\") pod \"cilium-operator-6c4d7847fc-drszc\" (UID: \"69226fd4-06ac-4fe7-849b-ba866ebbbfe4\") " pod="kube-system/cilium-operator-6c4d7847fc-drszc" Sep 13 01:05:45.945849 kubelet[2075]: I0913 01:05:45.945837 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-drszc\" (UID: \"69226fd4-06ac-4fe7-849b-ba866ebbbfe4\") " pod="kube-system/cilium-operator-6c4d7847fc-drszc" Sep 13 01:05:46.156885 env[1244]: time="2025-09-13T01:05:46.156571865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-drszc,Uid:69226fd4-06ac-4fe7-849b-ba866ebbbfe4,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:46.210777 env[1244]: time="2025-09-13T01:05:46.210725803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:46.210777 env[1244]: time="2025-09-13T01:05:46.210760180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:46.210951 env[1244]: time="2025-09-13T01:05:46.210928387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:46.211150 env[1244]: time="2025-09-13T01:05:46.211121769Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9 pid=2160 runtime=io.containerd.runc.v2 Sep 13 01:05:46.219860 systemd[1]: Started cri-containerd-3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9.scope. Sep 13 01:05:46.253652 env[1244]: time="2025-09-13T01:05:46.253626733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-drszc,Uid:69226fd4-06ac-4fe7-849b-ba866ebbbfe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9\"" Sep 13 01:05:46.255632 env[1244]: time="2025-09-13T01:05:46.255610762Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 01:05:46.315295 env[1244]: time="2025-09-13T01:05:46.315261085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t6f74,Uid:f138070b-1d84-48f7-ac58-5f17d6baba70,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:46.320001 env[1244]: time="2025-09-13T01:05:46.319978496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpwkg,Uid:74bb19c9-295d-4ca9-96c4-8268351c5a4d,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:46.327648 env[1244]: time="2025-09-13T01:05:46.327590677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:46.327648 env[1244]: time="2025-09-13T01:05:46.327623861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:46.327648 env[1244]: time="2025-09-13T01:05:46.327631846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:46.327981 env[1244]: time="2025-09-13T01:05:46.327932843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8245658a00b27d9499f7028f3cf7f94fc886927ea048bc6048fd48cc2b7d0b0 pid=2200 runtime=io.containerd.runc.v2 Sep 13 01:05:46.329980 env[1244]: time="2025-09-13T01:05:46.329936031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:46.330065 env[1244]: time="2025-09-13T01:05:46.329972355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:46.330065 env[1244]: time="2025-09-13T01:05:46.329981620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:46.330148 env[1244]: time="2025-09-13T01:05:46.330096787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266 pid=2212 runtime=io.containerd.runc.v2 Sep 13 01:05:46.336890 systemd[1]: Started cri-containerd-a8245658a00b27d9499f7028f3cf7f94fc886927ea048bc6048fd48cc2b7d0b0.scope. Sep 13 01:05:46.346274 systemd[1]: Started cri-containerd-7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266.scope. 
Sep 13 01:05:46.380873 env[1244]: time="2025-09-13T01:05:46.380842970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpwkg,Uid:74bb19c9-295d-4ca9-96c4-8268351c5a4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\"" Sep 13 01:05:46.382343 env[1244]: time="2025-09-13T01:05:46.382313800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t6f74,Uid:f138070b-1d84-48f7-ac58-5f17d6baba70,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8245658a00b27d9499f7028f3cf7f94fc886927ea048bc6048fd48cc2b7d0b0\"" Sep 13 01:05:46.387665 env[1244]: time="2025-09-13T01:05:46.387628121Z" level=info msg="CreateContainer within sandbox \"a8245658a00b27d9499f7028f3cf7f94fc886927ea048bc6048fd48cc2b7d0b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 01:05:46.394744 env[1244]: time="2025-09-13T01:05:46.394716442Z" level=info msg="CreateContainer within sandbox \"a8245658a00b27d9499f7028f3cf7f94fc886927ea048bc6048fd48cc2b7d0b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7191e0f404450218e62ed45cc48b59414b22b087818deea37a09f232a9ec1ccd\"" Sep 13 01:05:46.396491 env[1244]: time="2025-09-13T01:05:46.396470456Z" level=info msg="StartContainer for \"7191e0f404450218e62ed45cc48b59414b22b087818deea37a09f232a9ec1ccd\"" Sep 13 01:05:46.408895 systemd[1]: Started cri-containerd-7191e0f404450218e62ed45cc48b59414b22b087818deea37a09f232a9ec1ccd.scope. Sep 13 01:05:46.432384 env[1244]: time="2025-09-13T01:05:46.432354416Z" level=info msg="StartContainer for \"7191e0f404450218e62ed45cc48b59414b22b087818deea37a09f232a9ec1ccd\" returns successfully" Sep 13 01:05:47.464850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174668873.mount: Deactivated successfully. 
Sep 13 01:05:47.851251 kubelet[2075]: I0913 01:05:47.851216 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t6f74" podStartSLOduration=2.85120637 podStartE2EDuration="2.85120637s" podCreationTimestamp="2025-09-13 01:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:05:47.28291564 +0000 UTC m=+5.328390780" watchObservedRunningTime="2025-09-13 01:05:47.85120637 +0000 UTC m=+5.896681504" Sep 13 01:05:48.301335 env[1244]: time="2025-09-13T01:05:48.301302669Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:48.302110 env[1244]: time="2025-09-13T01:05:48.302091478Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:48.302931 env[1244]: time="2025-09-13T01:05:48.302917075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:48.303355 env[1244]: time="2025-09-13T01:05:48.303335981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 01:05:48.309229 env[1244]: time="2025-09-13T01:05:48.309205503Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 01:05:48.313070 
env[1244]: time="2025-09-13T01:05:48.313042802Z" level=info msg="CreateContainer within sandbox \"3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 01:05:48.321661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183620766.mount: Deactivated successfully. Sep 13 01:05:48.324952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215057251.mount: Deactivated successfully. Sep 13 01:05:48.326429 env[1244]: time="2025-09-13T01:05:48.326371679Z" level=info msg="CreateContainer within sandbox \"3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\"" Sep 13 01:05:48.328477 env[1244]: time="2025-09-13T01:05:48.327535688Z" level=info msg="StartContainer for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\"" Sep 13 01:05:48.344726 systemd[1]: Started cri-containerd-2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de.scope. 
Sep 13 01:05:48.380032 env[1244]: time="2025-09-13T01:05:48.379999080Z" level=info msg="StartContainer for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" returns successfully" Sep 13 01:05:49.412786 kubelet[2075]: I0913 01:05:49.412743 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-drszc" podStartSLOduration=2.359562746 podStartE2EDuration="4.41273112s" podCreationTimestamp="2025-09-13 01:05:45 +0000 UTC" firstStartedPulling="2025-09-13 01:05:46.254488555 +0000 UTC m=+4.299963684" lastFinishedPulling="2025-09-13 01:05:48.307656926 +0000 UTC m=+6.353132058" observedRunningTime="2025-09-13 01:05:49.412058909 +0000 UTC m=+7.457534054" watchObservedRunningTime="2025-09-13 01:05:49.41273112 +0000 UTC m=+7.458206259" Sep 13 01:05:53.607296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837120375.mount: Deactivated successfully. Sep 13 01:05:58.000581 env[1244]: time="2025-09-13T01:05:58.000541931Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:58.025434 env[1244]: time="2025-09-13T01:05:58.025395345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:58.030517 env[1244]: time="2025-09-13T01:05:58.030502503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:58.030780 env[1244]: time="2025-09-13T01:05:58.030765057Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 01:05:58.170836 env[1244]: time="2025-09-13T01:05:58.170731994Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:05:58.183938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502114222.mount: Deactivated successfully. Sep 13 01:05:58.189692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368733235.mount: Deactivated successfully. Sep 13 01:05:58.205647 env[1244]: time="2025-09-13T01:05:58.205603942Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\"" Sep 13 01:05:58.211970 env[1244]: time="2025-09-13T01:05:58.206017714Z" level=info msg="StartContainer for \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\"" Sep 13 01:05:58.231483 systemd[1]: Started cri-containerd-b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7.scope. Sep 13 01:05:58.262692 env[1244]: time="2025-09-13T01:05:58.262472187Z" level=info msg="StartContainer for \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\" returns successfully" Sep 13 01:05:58.283205 systemd[1]: cri-containerd-b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7.scope: Deactivated successfully. 
Sep 13 01:05:58.863255 env[1244]: time="2025-09-13T01:05:58.863214388Z" level=info msg="shim disconnected" id=b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7 Sep 13 01:05:58.863255 env[1244]: time="2025-09-13T01:05:58.863252601Z" level=warning msg="cleaning up after shim disconnected" id=b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7 namespace=k8s.io Sep 13 01:05:58.863255 env[1244]: time="2025-09-13T01:05:58.863261000Z" level=info msg="cleaning up dead shim" Sep 13 01:05:58.868639 env[1244]: time="2025-09-13T01:05:58.868607386Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:05:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2531 runtime=io.containerd.runc.v2\n" Sep 13 01:05:59.181105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7-rootfs.mount: Deactivated successfully. Sep 13 01:05:59.440542 env[1244]: time="2025-09-13T01:05:59.439521443Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:05:59.446082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339035766.mount: Deactivated successfully. Sep 13 01:05:59.452554 env[1244]: time="2025-09-13T01:05:59.452513952Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\"" Sep 13 01:05:59.453385 env[1244]: time="2025-09-13T01:05:59.453360574Z" level=info msg="StartContainer for \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\"" Sep 13 01:05:59.469137 systemd[1]: Started cri-containerd-47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1.scope. 
Sep 13 01:05:59.494028 env[1244]: time="2025-09-13T01:05:59.493995559Z" level=info msg="StartContainer for \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\" returns successfully" Sep 13 01:05:59.553123 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:05:59.553262 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:05:59.554054 systemd[1]: Stopping systemd-sysctl.service... Sep 13 01:05:59.555754 systemd[1]: Starting systemd-sysctl.service... Sep 13 01:05:59.559500 systemd[1]: cri-containerd-47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1.scope: Deactivated successfully. Sep 13 01:05:59.590850 env[1244]: time="2025-09-13T01:05:59.590804739Z" level=info msg="shim disconnected" id=47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1 Sep 13 01:05:59.591250 env[1244]: time="2025-09-13T01:05:59.591238157Z" level=warning msg="cleaning up after shim disconnected" id=47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1 namespace=k8s.io Sep 13 01:05:59.592192 env[1244]: time="2025-09-13T01:05:59.591345898Z" level=info msg="cleaning up dead shim" Sep 13 01:05:59.597279 env[1244]: time="2025-09-13T01:05:59.597247646Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:05:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2599 runtime=io.containerd.runc.v2\n" Sep 13 01:05:59.644065 systemd[1]: Finished systemd-sysctl.service. Sep 13 01:06:00.180816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1-rootfs.mount: Deactivated successfully. 
Sep 13 01:06:00.443791 env[1244]: time="2025-09-13T01:06:00.443580616Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:06:00.472488 env[1244]: time="2025-09-13T01:06:00.472215599Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\"" Sep 13 01:06:00.472717 env[1244]: time="2025-09-13T01:06:00.472700540Z" level=info msg="StartContainer for \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\"" Sep 13 01:06:00.488640 systemd[1]: Started cri-containerd-ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142.scope. Sep 13 01:06:00.521360 env[1244]: time="2025-09-13T01:06:00.521322484Z" level=info msg="StartContainer for \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\" returns successfully" Sep 13 01:06:00.573789 systemd[1]: cri-containerd-ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142.scope: Deactivated successfully. 
Sep 13 01:06:00.593492 env[1244]: time="2025-09-13T01:06:00.593457657Z" level=info msg="shim disconnected" id=ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142 Sep 13 01:06:00.597664 env[1244]: time="2025-09-13T01:06:00.593685572Z" level=warning msg="cleaning up after shim disconnected" id=ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142 namespace=k8s.io Sep 13 01:06:00.597664 env[1244]: time="2025-09-13T01:06:00.593694894Z" level=info msg="cleaning up dead shim" Sep 13 01:06:00.598226 env[1244]: time="2025-09-13T01:06:00.598208630Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:06:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2661 runtime=io.containerd.runc.v2\n" Sep 13 01:06:01.180823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142-rootfs.mount: Deactivated successfully. Sep 13 01:06:01.451365 env[1244]: time="2025-09-13T01:06:01.451024199Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:06:01.502041 env[1244]: time="2025-09-13T01:06:01.501945103Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\"" Sep 13 01:06:01.502448 env[1244]: time="2025-09-13T01:06:01.502430395Z" level=info msg="StartContainer for \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\"" Sep 13 01:06:01.520316 systemd[1]: Started cri-containerd-eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80.scope. 
Sep 13 01:06:01.543680 env[1244]: time="2025-09-13T01:06:01.543649877Z" level=info msg="StartContainer for \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\" returns successfully" Sep 13 01:06:01.546236 systemd[1]: cri-containerd-eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80.scope: Deactivated successfully. Sep 13 01:06:01.569333 env[1244]: time="2025-09-13T01:06:01.569290029Z" level=info msg="shim disconnected" id=eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80 Sep 13 01:06:01.569333 env[1244]: time="2025-09-13T01:06:01.569319612Z" level=warning msg="cleaning up after shim disconnected" id=eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80 namespace=k8s.io Sep 13 01:06:01.569333 env[1244]: time="2025-09-13T01:06:01.569331860Z" level=info msg="cleaning up dead shim" Sep 13 01:06:01.575221 env[1244]: time="2025-09-13T01:06:01.575190836Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:06:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2715 runtime=io.containerd.runc.v2\n" Sep 13 01:06:02.181229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80-rootfs.mount: Deactivated successfully. 
Sep 13 01:06:02.446814 env[1244]: time="2025-09-13T01:06:02.446587153Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:06:02.527976 env[1244]: time="2025-09-13T01:06:02.527945148Z" level=info msg="CreateContainer within sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\"" Sep 13 01:06:02.542334 env[1244]: time="2025-09-13T01:06:02.542307584Z" level=info msg="StartContainer for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\"" Sep 13 01:06:02.553379 systemd[1]: Started cri-containerd-0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75.scope. Sep 13 01:06:02.600903 env[1244]: time="2025-09-13T01:06:02.600870362Z" level=info msg="StartContainer for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" returns successfully" Sep 13 01:06:03.072025 kubelet[2075]: I0913 01:06:03.071961 2075 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 01:06:03.399904 systemd[1]: Created slice kubepods-burstable-pod17dc8b0e_e8ab_4cf0_898e_79e06e4228db.slice. Sep 13 01:06:03.417254 systemd[1]: Created slice kubepods-burstable-pod8e3c6818_853f_44b3_9639_b8c5145fb1e7.slice. 
Sep 13 01:06:03.480896 kubelet[2075]: I0913 01:06:03.480868 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7gp\" (UniqueName: \"kubernetes.io/projected/17dc8b0e-e8ab-4cf0-898e-79e06e4228db-kube-api-access-4d7gp\") pod \"coredns-674b8bbfcf-qnvpn\" (UID: \"17dc8b0e-e8ab-4cf0-898e-79e06e4228db\") " pod="kube-system/coredns-674b8bbfcf-qnvpn"
Sep 13 01:06:03.481702 kubelet[2075]: I0913 01:06:03.481025 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e3c6818-853f-44b3-9639-b8c5145fb1e7-config-volume\") pod \"coredns-674b8bbfcf-62fsq\" (UID: \"8e3c6818-853f-44b3-9639-b8c5145fb1e7\") " pod="kube-system/coredns-674b8bbfcf-62fsq"
Sep 13 01:06:03.481702 kubelet[2075]: I0913 01:06:03.481065 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgf9\" (UniqueName: \"kubernetes.io/projected/8e3c6818-853f-44b3-9639-b8c5145fb1e7-kube-api-access-lkgf9\") pod \"coredns-674b8bbfcf-62fsq\" (UID: \"8e3c6818-853f-44b3-9639-b8c5145fb1e7\") " pod="kube-system/coredns-674b8bbfcf-62fsq"
Sep 13 01:06:03.481702 kubelet[2075]: I0913 01:06:03.481085 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17dc8b0e-e8ab-4cf0-898e-79e06e4228db-config-volume\") pod \"coredns-674b8bbfcf-qnvpn\" (UID: \"17dc8b0e-e8ab-4cf0-898e-79e06e4228db\") " pod="kube-system/coredns-674b8bbfcf-qnvpn"
Sep 13 01:06:03.715034 env[1244]: time="2025-09-13T01:06:03.714952213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qnvpn,Uid:17dc8b0e-e8ab-4cf0-898e-79e06e4228db,Namespace:kube-system,Attempt:0,}"
Sep 13 01:06:03.724679 env[1244]: time="2025-09-13T01:06:03.724655104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-62fsq,Uid:8e3c6818-853f-44b3-9639-b8c5145fb1e7,Namespace:kube-system,Attempt:0,}"
Sep 13 01:06:03.823481 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:06:04.117429 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:06:05.887993 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 01:06:05.888126 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 01:06:05.912947 systemd-networkd[1060]: cilium_host: Link UP
Sep 13 01:06:05.913548 systemd-networkd[1060]: cilium_net: Link UP
Sep 13 01:06:05.913648 systemd-networkd[1060]: cilium_net: Gained carrier
Sep 13 01:06:05.913772 systemd-networkd[1060]: cilium_host: Gained carrier
Sep 13 01:06:06.085866 systemd-networkd[1060]: cilium_vxlan: Link UP
Sep 13 01:06:06.085871 systemd-networkd[1060]: cilium_vxlan: Gained carrier
Sep 13 01:06:06.315535 systemd-networkd[1060]: cilium_host: Gained IPv6LL
Sep 13 01:06:06.479431 kernel: NET: Registered PF_ALG protocol family
Sep 13 01:06:06.747532 systemd-networkd[1060]: cilium_net: Gained IPv6LL
Sep 13 01:06:07.170483 systemd-networkd[1060]: lxc_health: Link UP
Sep 13 01:06:07.179478 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:06:07.179686 systemd-networkd[1060]: lxc_health: Gained carrier
Sep 13 01:06:07.257124 systemd-networkd[1060]: lxc2df0a4e16b6e: Link UP
Sep 13 01:06:07.265434 kernel: eth0: renamed from tmpe0b18
Sep 13 01:06:07.270450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2df0a4e16b6e: link becomes ready
Sep 13 01:06:07.270383 systemd-networkd[1060]: lxc2df0a4e16b6e: Gained carrier
Sep 13 01:06:07.279856 systemd-networkd[1060]: lxc1dfc196be581: Link UP
Sep 13 01:06:07.283428 kernel: eth0: renamed from tmp22e32
Sep 13 01:06:07.289047 systemd-networkd[1060]: lxc1dfc196be581: Gained carrier
Sep 13 01:06:07.289424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1dfc196be581: link becomes ready
Sep 13 01:06:07.388559 systemd-networkd[1060]: cilium_vxlan: Gained IPv6LL
Sep 13 01:06:08.351296 kubelet[2075]: I0913 01:06:08.349165 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tpwkg" podStartSLOduration=11.661318689 podStartE2EDuration="23.347540717s" podCreationTimestamp="2025-09-13 01:05:45 +0000 UTC" firstStartedPulling="2025-09-13 01:05:46.383168272 +0000 UTC m=+4.428643407" lastFinishedPulling="2025-09-13 01:05:58.069390297 +0000 UTC m=+16.114865435" observedRunningTime="2025-09-13 01:06:03.467713001 +0000 UTC m=+21.513188141" watchObservedRunningTime="2025-09-13 01:06:08.347540717 +0000 UTC m=+26.393015853"
Sep 13 01:06:08.539627 systemd-networkd[1060]: lxc1dfc196be581: Gained IPv6LL
Sep 13 01:06:08.731615 systemd-networkd[1060]: lxc2df0a4e16b6e: Gained IPv6LL
Sep 13 01:06:09.115583 systemd-networkd[1060]: lxc_health: Gained IPv6LL
Sep 13 01:06:10.201336 env[1244]: time="2025-09-13T01:06:10.201282494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:06:10.201336 env[1244]: time="2025-09-13T01:06:10.201307492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:06:10.201336 env[1244]: time="2025-09-13T01:06:10.201314651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:06:10.205351 env[1244]: time="2025-09-13T01:06:10.203914369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:06:10.205351 env[1244]: time="2025-09-13T01:06:10.203974477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:06:10.205351 env[1244]: time="2025-09-13T01:06:10.203994665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:06:10.205351 env[1244]: time="2025-09-13T01:06:10.204097258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22e3260e8ce395dd7607d172bc08351806d70b990b0c5df1370b59e982da608b pid=3285 runtime=io.containerd.runc.v2
Sep 13 01:06:10.213769 env[1244]: time="2025-09-13T01:06:10.213721160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0b18441621b5b8cffed61237bfb9ef711cff228e382745a223319dafb986e98 pid=3272 runtime=io.containerd.runc.v2
Sep 13 01:06:10.244876 systemd[1]: run-containerd-runc-k8s.io-22e3260e8ce395dd7607d172bc08351806d70b990b0c5df1370b59e982da608b-runc.d9MD1g.mount: Deactivated successfully.
Sep 13 01:06:10.254300 systemd[1]: Started cri-containerd-e0b18441621b5b8cffed61237bfb9ef711cff228e382745a223319dafb986e98.scope.
Sep 13 01:06:10.263549 systemd[1]: Started cri-containerd-22e3260e8ce395dd7607d172bc08351806d70b990b0c5df1370b59e982da608b.scope.
Sep 13 01:06:10.279126 systemd-resolved[1199]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 01:06:10.279762 systemd-resolved[1199]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 01:06:10.304191 env[1244]: time="2025-09-13T01:06:10.304165736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qnvpn,Uid:17dc8b0e-e8ab-4cf0-898e-79e06e4228db,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0b18441621b5b8cffed61237bfb9ef711cff228e382745a223319dafb986e98\""
Sep 13 01:06:10.315052 env[1244]: time="2025-09-13T01:06:10.315027485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-62fsq,Uid:8e3c6818-853f-44b3-9639-b8c5145fb1e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"22e3260e8ce395dd7607d172bc08351806d70b990b0c5df1370b59e982da608b\""
Sep 13 01:06:10.315569 env[1244]: time="2025-09-13T01:06:10.315472893Z" level=info msg="CreateContainer within sandbox \"e0b18441621b5b8cffed61237bfb9ef711cff228e382745a223319dafb986e98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 01:06:10.323935 env[1244]: time="2025-09-13T01:06:10.323909909Z" level=info msg="CreateContainer within sandbox \"22e3260e8ce395dd7607d172bc08351806d70b990b0c5df1370b59e982da608b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 01:06:10.391886 env[1244]: time="2025-09-13T01:06:10.391854985Z" level=info msg="CreateContainer within sandbox \"e0b18441621b5b8cffed61237bfb9ef711cff228e382745a223319dafb986e98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d0dc69e33e6b9a94e508df3f4193e01c84e091c851578f01997876028713c89\""
Sep 13 01:06:10.392646 env[1244]: time="2025-09-13T01:06:10.392623480Z" level=info msg="StartContainer for \"2d0dc69e33e6b9a94e508df3f4193e01c84e091c851578f01997876028713c89\""
Sep 13 01:06:10.393915 env[1244]: time="2025-09-13T01:06:10.393847233Z" level=info msg="CreateContainer within sandbox \"22e3260e8ce395dd7607d172bc08351806d70b990b0c5df1370b59e982da608b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"424d6dfb1a000d737efdf6aa9a693761ab8755b8b39de6191ac9cbefa69294e4\""
Sep 13 01:06:10.394226 env[1244]: time="2025-09-13T01:06:10.394208393Z" level=info msg="StartContainer for \"424d6dfb1a000d737efdf6aa9a693761ab8755b8b39de6191ac9cbefa69294e4\""
Sep 13 01:06:10.411710 systemd[1]: Started cri-containerd-2d0dc69e33e6b9a94e508df3f4193e01c84e091c851578f01997876028713c89.scope.
Sep 13 01:06:10.420650 systemd[1]: Started cri-containerd-424d6dfb1a000d737efdf6aa9a693761ab8755b8b39de6191ac9cbefa69294e4.scope.
Sep 13 01:06:10.455901 env[1244]: time="2025-09-13T01:06:10.455808008Z" level=info msg="StartContainer for \"424d6dfb1a000d737efdf6aa9a693761ab8755b8b39de6191ac9cbefa69294e4\" returns successfully"
Sep 13 01:06:10.463386 env[1244]: time="2025-09-13T01:06:10.463103776Z" level=info msg="StartContainer for \"2d0dc69e33e6b9a94e508df3f4193e01c84e091c851578f01997876028713c89\" returns successfully"
Sep 13 01:06:10.475902 kubelet[2075]: I0913 01:06:10.475871 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-62fsq" podStartSLOduration=25.475861101 podStartE2EDuration="25.475861101s" podCreationTimestamp="2025-09-13 01:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:10.475642272 +0000 UTC m=+28.521117407" watchObservedRunningTime="2025-09-13 01:06:10.475861101 +0000 UTC m=+28.521336234"
Sep 13 01:06:10.489524 kubelet[2075]: I0913 01:06:10.489477 2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qnvpn" podStartSLOduration=25.48946655 podStartE2EDuration="25.48946655s" podCreationTimestamp="2025-09-13 01:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:10.489036966 +0000 UTC m=+28.534512107" watchObservedRunningTime="2025-09-13 01:06:10.48946655 +0000 UTC m=+28.534941686"
Sep 13 01:06:11.210323 systemd[1]: run-containerd-runc-k8s.io-e0b18441621b5b8cffed61237bfb9ef711cff228e382745a223319dafb986e98-runc.AVW7XH.mount: Deactivated successfully.
Sep 13 01:06:59.209064 systemd[1]: Started sshd@5-139.178.70.102:22-147.75.109.163:34116.service.
Sep 13 01:06:59.298429 sshd[3447]: Accepted publickey for core from 147.75.109.163 port 34116 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:06:59.299844 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:06:59.304816 systemd[1]: Started session-8.scope.
Sep 13 01:06:59.305117 systemd-logind[1235]: New session 8 of user core.
Sep 13 01:06:59.560368 sshd[3447]: pam_unix(sshd:session): session closed for user core
Sep 13 01:06:59.562435 systemd[1]: sshd@5-139.178.70.102:22-147.75.109.163:34116.service: Deactivated successfully.
Sep 13 01:06:59.562890 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 01:06:59.563225 systemd-logind[1235]: Session 8 logged out. Waiting for processes to exit.
Sep 13 01:06:59.563715 systemd-logind[1235]: Removed session 8.
Sep 13 01:07:04.564037 systemd[1]: Started sshd@6-139.178.70.102:22-147.75.109.163:56630.service.
Sep 13 01:07:04.603767 sshd[3460]: Accepted publickey for core from 147.75.109.163 port 56630 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:04.604918 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:04.608296 systemd[1]: Started session-9.scope.
Sep 13 01:07:04.609176 systemd-logind[1235]: New session 9 of user core.
Sep 13 01:07:04.704715 sshd[3460]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:04.706493 systemd[1]: sshd@6-139.178.70.102:22-147.75.109.163:56630.service: Deactivated successfully.
Sep 13 01:07:04.706962 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 01:07:04.707600 systemd-logind[1235]: Session 9 logged out. Waiting for processes to exit.
Sep 13 01:07:04.708120 systemd-logind[1235]: Removed session 9.
Sep 13 01:07:09.709596 systemd[1]: Started sshd@7-139.178.70.102:22-147.75.109.163:56640.service.
Sep 13 01:07:09.743625 sshd[3472]: Accepted publickey for core from 147.75.109.163 port 56640 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:09.744973 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:09.748328 systemd-logind[1235]: New session 10 of user core.
Sep 13 01:07:09.749055 systemd[1]: Started session-10.scope.
Sep 13 01:07:09.893552 sshd[3472]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:09.895142 systemd-logind[1235]: Session 10 logged out. Waiting for processes to exit.
Sep 13 01:07:09.895310 systemd[1]: sshd@7-139.178.70.102:22-147.75.109.163:56640.service: Deactivated successfully.
Sep 13 01:07:09.895748 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 01:07:09.896249 systemd-logind[1235]: Removed session 10.
Sep 13 01:07:14.896229 systemd[1]: Started sshd@8-139.178.70.102:22-147.75.109.163:33764.service.
Sep 13 01:07:14.936201 sshd[3484]: Accepted publickey for core from 147.75.109.163 port 33764 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:14.937390 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:14.940535 systemd[1]: Started session-11.scope.
Sep 13 01:07:14.940945 systemd-logind[1235]: New session 11 of user core.
Sep 13 01:07:15.042805 sshd[3484]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:15.046573 systemd[1]: Started sshd@9-139.178.70.102:22-147.75.109.163:33774.service.
Sep 13 01:07:15.051220 systemd[1]: sshd@8-139.178.70.102:22-147.75.109.163:33764.service: Deactivated successfully.
Sep 13 01:07:15.051927 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 01:07:15.052626 systemd-logind[1235]: Session 11 logged out. Waiting for processes to exit.
Sep 13 01:07:15.053092 systemd-logind[1235]: Removed session 11.
Sep 13 01:07:15.081944 sshd[3496]: Accepted publickey for core from 147.75.109.163 port 33774 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:15.082983 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:15.086099 systemd[1]: Started session-12.scope.
Sep 13 01:07:15.086518 systemd-logind[1235]: New session 12 of user core.
Sep 13 01:07:15.220311 sshd[3496]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:15.222822 systemd[1]: Started sshd@10-139.178.70.102:22-147.75.109.163:33776.service.
Sep 13 01:07:15.225789 systemd[1]: sshd@9-139.178.70.102:22-147.75.109.163:33774.service: Deactivated successfully.
Sep 13 01:07:15.226195 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 01:07:15.229374 systemd-logind[1235]: Session 12 logged out. Waiting for processes to exit.
Sep 13 01:07:15.229961 systemd-logind[1235]: Removed session 12.
Sep 13 01:07:15.265152 sshd[3506]: Accepted publickey for core from 147.75.109.163 port 33776 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:15.266310 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:15.270122 systemd[1]: Started session-13.scope.
Sep 13 01:07:15.270335 systemd-logind[1235]: New session 13 of user core.
Sep 13 01:07:15.367387 sshd[3506]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:15.369170 systemd[1]: sshd@10-139.178.70.102:22-147.75.109.163:33776.service: Deactivated successfully.
Sep 13 01:07:15.369677 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 01:07:15.370383 systemd-logind[1235]: Session 13 logged out. Waiting for processes to exit.
Sep 13 01:07:15.371039 systemd-logind[1235]: Removed session 13.
Sep 13 01:07:20.371619 systemd[1]: Started sshd@11-139.178.70.102:22-147.75.109.163:43012.service.
Sep 13 01:07:20.419211 sshd[3521]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:20.420713 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:20.424937 systemd[1]: Started session-14.scope.
Sep 13 01:07:20.425999 systemd-logind[1235]: New session 14 of user core.
Sep 13 01:07:20.526092 sshd[3521]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:20.528134 systemd[1]: sshd@11-139.178.70.102:22-147.75.109.163:43012.service: Deactivated successfully.
Sep 13 01:07:20.528686 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 01:07:20.529261 systemd-logind[1235]: Session 14 logged out. Waiting for processes to exit.
Sep 13 01:07:20.529817 systemd-logind[1235]: Removed session 14.
Sep 13 01:07:25.528986 systemd[1]: Started sshd@12-139.178.70.102:22-147.75.109.163:43014.service.
Sep 13 01:07:25.583403 sshd[3532]: Accepted publickey for core from 147.75.109.163 port 43014 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:25.584569 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:25.587657 systemd[1]: Started session-15.scope.
Sep 13 01:07:25.588553 systemd-logind[1235]: New session 15 of user core.
Sep 13 01:07:25.717383 sshd[3532]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:25.719797 systemd[1]: Started sshd@13-139.178.70.102:22-147.75.109.163:43018.service.
Sep 13 01:07:25.723510 systemd[1]: sshd@12-139.178.70.102:22-147.75.109.163:43014.service: Deactivated successfully.
Sep 13 01:07:25.723913 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 01:07:25.724318 systemd-logind[1235]: Session 15 logged out. Waiting for processes to exit.
Sep 13 01:07:25.724775 systemd-logind[1235]: Removed session 15.
Sep 13 01:07:25.754460 sshd[3542]: Accepted publickey for core from 147.75.109.163 port 43018 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:25.755460 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:25.758284 systemd[1]: Started session-16.scope.
Sep 13 01:07:25.758580 systemd-logind[1235]: New session 16 of user core.
Sep 13 01:07:26.386218 sshd[3542]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:26.389242 systemd[1]: Started sshd@14-139.178.70.102:22-147.75.109.163:43024.service.
Sep 13 01:07:26.392583 systemd[1]: sshd@13-139.178.70.102:22-147.75.109.163:43018.service: Deactivated successfully.
Sep 13 01:07:26.393010 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 01:07:26.393374 systemd-logind[1235]: Session 16 logged out. Waiting for processes to exit.
Sep 13 01:07:26.393909 systemd-logind[1235]: Removed session 16.
Sep 13 01:07:26.426009 sshd[3552]: Accepted publickey for core from 147.75.109.163 port 43024 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:26.427109 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:26.429963 systemd[1]: Started session-17.scope.
Sep 13 01:07:26.430339 systemd-logind[1235]: New session 17 of user core.
Sep 13 01:07:27.052120 sshd[3552]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:27.054615 systemd[1]: Started sshd@15-139.178.70.102:22-147.75.109.163:43032.service.
Sep 13 01:07:27.069243 systemd[1]: sshd@14-139.178.70.102:22-147.75.109.163:43024.service: Deactivated successfully.
Sep 13 01:07:27.069900 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 01:07:27.070539 systemd-logind[1235]: Session 17 logged out. Waiting for processes to exit.
Sep 13 01:07:27.071172 systemd-logind[1235]: Removed session 17.
Sep 13 01:07:27.110765 sshd[3566]: Accepted publickey for core from 147.75.109.163 port 43032 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:27.111893 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:27.115070 systemd-logind[1235]: New session 18 of user core.
Sep 13 01:07:27.115551 systemd[1]: Started session-18.scope.
Sep 13 01:07:27.289910 systemd[1]: Started sshd@16-139.178.70.102:22-147.75.109.163:43034.service.
Sep 13 01:07:27.295766 sshd[3566]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:27.298709 systemd-logind[1235]: Session 18 logged out. Waiting for processes to exit.
Sep 13 01:07:27.300133 systemd[1]: sshd@15-139.178.70.102:22-147.75.109.163:43032.service: Deactivated successfully.
Sep 13 01:07:27.300824 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 01:07:27.302109 systemd-logind[1235]: Removed session 18.
Sep 13 01:07:27.330029 sshd[3579]: Accepted publickey for core from 147.75.109.163 port 43034 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:27.331137 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:27.334226 systemd[1]: Started session-19.scope.
Sep 13 01:07:27.335106 systemd-logind[1235]: New session 19 of user core.
Sep 13 01:07:27.433757 sshd[3579]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:27.435579 systemd[1]: sshd@16-139.178.70.102:22-147.75.109.163:43034.service: Deactivated successfully.
Sep 13 01:07:27.436035 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 01:07:27.436697 systemd-logind[1235]: Session 19 logged out. Waiting for processes to exit.
Sep 13 01:07:27.437270 systemd-logind[1235]: Removed session 19.
Sep 13 01:07:32.438251 systemd[1]: Started sshd@17-139.178.70.102:22-147.75.109.163:49010.service.
Sep 13 01:07:32.473986 sshd[3591]: Accepted publickey for core from 147.75.109.163 port 49010 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:32.475208 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:32.478211 systemd-logind[1235]: New session 20 of user core.
Sep 13 01:07:32.478791 systemd[1]: Started session-20.scope.
Sep 13 01:07:32.564679 sshd[3591]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:32.566369 systemd[1]: sshd@17-139.178.70.102:22-147.75.109.163:49010.service: Deactivated successfully.
Sep 13 01:07:32.566844 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 01:07:32.567245 systemd-logind[1235]: Session 20 logged out. Waiting for processes to exit.
Sep 13 01:07:32.567820 systemd-logind[1235]: Removed session 20.
Sep 13 01:07:37.568194 systemd[1]: Started sshd@18-139.178.70.102:22-147.75.109.163:49018.service.
Sep 13 01:07:37.604544 sshd[3605]: Accepted publickey for core from 147.75.109.163 port 49018 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:37.605719 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:37.608810 systemd[1]: Started session-21.scope.
Sep 13 01:07:37.609038 systemd-logind[1235]: New session 21 of user core.
Sep 13 01:07:37.725138 sshd[3605]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:37.726653 systemd[1]: sshd@18-139.178.70.102:22-147.75.109.163:49018.service: Deactivated successfully.
Sep 13 01:07:37.727160 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 01:07:37.727772 systemd-logind[1235]: Session 21 logged out. Waiting for processes to exit.
Sep 13 01:07:37.728312 systemd-logind[1235]: Removed session 21.
Sep 13 01:07:42.728092 systemd[1]: Started sshd@19-139.178.70.102:22-147.75.109.163:40586.service.
Sep 13 01:07:42.761623 sshd[3619]: Accepted publickey for core from 147.75.109.163 port 40586 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:42.762986 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:42.766306 systemd-logind[1235]: New session 22 of user core.
Sep 13 01:07:42.766988 systemd[1]: Started session-22.scope.
Sep 13 01:07:42.870703 sshd[3619]: pam_unix(sshd:session): session closed for user core
Sep 13 01:07:42.873437 systemd[1]: Started sshd@20-139.178.70.102:22-147.75.109.163:40588.service.
Sep 13 01:07:42.875088 systemd[1]: sshd@19-139.178.70.102:22-147.75.109.163:40586.service: Deactivated successfully.
Sep 13 01:07:42.875484 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 01:07:42.875829 systemd-logind[1235]: Session 22 logged out. Waiting for processes to exit.
Sep 13 01:07:42.876281 systemd-logind[1235]: Removed session 22.
Sep 13 01:07:42.908291 sshd[3630]: Accepted publickey for core from 147.75.109.163 port 40588 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:07:42.909385 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:07:42.913405 systemd[1]: Started session-23.scope.
Sep 13 01:07:42.913703 systemd-logind[1235]: New session 23 of user core.
Sep 13 01:07:44.733321 systemd[1]: run-containerd-runc-k8s.io-0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75-runc.D7S2dX.mount: Deactivated successfully.
Sep 13 01:07:44.861303 env[1244]: time="2025-09-13T01:07:44.861272291Z" level=info msg="StopContainer for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" with timeout 30 (s)"
Sep 13 01:07:44.861941 env[1244]: time="2025-09-13T01:07:44.861913062Z" level=info msg="Stop container \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" with signal terminated"
Sep 13 01:07:44.876542 env[1244]: time="2025-09-13T01:07:44.876502083Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:07:44.893970 env[1244]: time="2025-09-13T01:07:44.893947281Z" level=info msg="StopContainer for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" with timeout 2 (s)"
Sep 13 01:07:44.894249 env[1244]: time="2025-09-13T01:07:44.894220134Z" level=info msg="Stop container \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" with signal terminated"
Sep 13 01:07:44.900595 systemd-networkd[1060]: lxc_health: Link DOWN
Sep 13 01:07:44.900605 systemd-networkd[1060]: lxc_health: Lost carrier
Sep 13 01:07:44.952158 systemd[1]: cri-containerd-2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de.scope: Deactivated successfully.
Sep 13 01:07:44.966778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de-rootfs.mount: Deactivated successfully.
Sep 13 01:07:44.977056 systemd[1]: cri-containerd-0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75.scope: Deactivated successfully.
Sep 13 01:07:44.977236 systemd[1]: cri-containerd-0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75.scope: Consumed 4.882s CPU time.
Sep 13 01:07:44.989904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75-rootfs.mount: Deactivated successfully.
Sep 13 01:07:45.003397 env[1244]: time="2025-09-13T01:07:44.996518664Z" level=info msg="shim disconnected" id=2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de
Sep 13 01:07:45.003397 env[1244]: time="2025-09-13T01:07:44.996544453Z" level=warning msg="cleaning up after shim disconnected" id=2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de namespace=k8s.io
Sep 13 01:07:45.003397 env[1244]: time="2025-09-13T01:07:44.996551481Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:45.003397 env[1244]: time="2025-09-13T01:07:45.001559744Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:45.005564 env[1244]: time="2025-09-13T01:07:45.005537913Z" level=info msg="shim disconnected" id=0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75
Sep 13 01:07:45.005654 env[1244]: time="2025-09-13T01:07:45.005643870Z" level=warning msg="cleaning up after shim disconnected" id=0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75 namespace=k8s.io
Sep 13 01:07:45.005708 env[1244]: time="2025-09-13T01:07:45.005693411Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:45.010275 env[1244]: time="2025-09-13T01:07:45.010257153Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:45.019356 env[1244]: time="2025-09-13T01:07:45.019329059Z" level=info msg="StopContainer for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" returns successfully"
Sep 13 01:07:45.019833 env[1244]: time="2025-09-13T01:07:45.019809132Z" level=info msg="StopPodSandbox for \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\""
Sep 13 01:07:45.021326 env[1244]: time="2025-09-13T01:07:45.019861760Z" level=info msg="Container to stop \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:07:45.021326 env[1244]: time="2025-09-13T01:07:45.019874275Z" level=info msg="Container to stop \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:07:45.021326 env[1244]: time="2025-09-13T01:07:45.019882849Z" level=info msg="Container to stop \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:07:45.021326 env[1244]: time="2025-09-13T01:07:45.019890941Z" level=info msg="Container to stop \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:07:45.021326 env[1244]: time="2025-09-13T01:07:45.019898981Z" level=info msg="Container to stop \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:07:45.023917 env[1244]: time="2025-09-13T01:07:45.023893901Z" level=info msg="StopContainer for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" returns successfully"
Sep 13 01:07:45.024130 env[1244]: time="2025-09-13T01:07:45.024109097Z" level=info msg="StopPodSandbox for \"3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9\""
Sep 13 01:07:45.024189 env[1244]: time="2025-09-13T01:07:45.024151381Z" level=info msg="Container to stop \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:07:45.025026 systemd[1]: cri-containerd-7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266.scope: Deactivated successfully.
Sep 13 01:07:45.037768 systemd[1]: cri-containerd-3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9.scope: Deactivated successfully.
Sep 13 01:07:45.099086 env[1244]: time="2025-09-13T01:07:45.099050529Z" level=info msg="shim disconnected" id=3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9
Sep 13 01:07:45.099086 env[1244]: time="2025-09-13T01:07:45.099079852Z" level=warning msg="cleaning up after shim disconnected" id=3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9 namespace=k8s.io
Sep 13 01:07:45.099086 env[1244]: time="2025-09-13T01:07:45.099088341Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:45.099426 env[1244]: time="2025-09-13T01:07:45.099393656Z" level=info msg="shim disconnected" id=7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266
Sep 13 01:07:45.099480 env[1244]: time="2025-09-13T01:07:45.099430867Z" level=warning msg="cleaning up after shim disconnected" id=7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266 namespace=k8s.io
Sep 13 01:07:45.099480 env[1244]: time="2025-09-13T01:07:45.099441452Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:45.108452 env[1244]: time="2025-09-13T01:07:45.108381342Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:45.108867 env[1244]: time="2025-09-13T01:07:45.108847189Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3758 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:45.124909 env[1244]: time="2025-09-13T01:07:45.124880125Z" level=info msg="TearDown network for sandbox \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" successfully"
Sep 13 01:07:45.124909 env[1244]: time="2025-09-13T01:07:45.124903377Z" level=info msg="StopPodSandbox for \"7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266\" returns successfully"
Sep 13 01:07:45.130329 env[1244]: time="2025-09-13T01:07:45.125049773Z" level=info msg="TearDown network for sandbox \"3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9\" successfully"
Sep 13 01:07:45.130329 env[1244]: time="2025-09-13T01:07:45.125061777Z" level=info msg="StopPodSandbox for \"3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9\" returns successfully"
Sep 13 01:07:45.216478 kubelet[2075]: I0913 01:07:45.216452 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-bpf-maps\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216478 kubelet[2075]: I0913 01:07:45.216480 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hostproc\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216773 kubelet[2075]: I0913 01:07:45.216490 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-run\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216773 kubelet[2075]: I0913 01:07:45.216499 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-kernel\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216773 kubelet[2075]: I0913 01:07:45.216510 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cni-path\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216773 kubelet[2075]: I0913 01:07:45.216525 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjtfr\" (UniqueName: \"kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-kube-api-access-zjtfr\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216773 kubelet[2075]: I0913 01:07:45.216536 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-cilium-config-path\") pod \"69226fd4-06ac-4fe7-849b-ba866ebbbfe4\" (UID: \"69226fd4-06ac-4fe7-849b-ba866ebbbfe4\") "
Sep 13 01:07:45.216773 kubelet[2075]: I0913 01:07:45.216545 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-lib-modules\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216918 kubelet[2075]: I0913 01:07:45.216555 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74bb19c9-295d-4ca9-96c4-8268351c5a4d-clustermesh-secrets\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") "
Sep 13 01:07:45.216918 kubelet[2075]: I0913 01:07:45.216564 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hubble-tls\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " Sep 13 01:07:45.216918 kubelet[2075]: I0913 01:07:45.216574 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-net\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " Sep 13 01:07:45.216918 kubelet[2075]: I0913 01:07:45.216586 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-xtables-lock\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " Sep 13 01:07:45.216918 kubelet[2075]: I0913 01:07:45.216595 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-config-path\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " Sep 13 01:07:45.216918 kubelet[2075]: I0913 01:07:45.216605 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjklt\" (UniqueName: \"kubernetes.io/projected/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-kube-api-access-hjklt\") pod \"69226fd4-06ac-4fe7-849b-ba866ebbbfe4\" (UID: \"69226fd4-06ac-4fe7-849b-ba866ebbbfe4\") " Sep 13 01:07:45.217048 kubelet[2075]: I0913 01:07:45.216615 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-cgroup\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " Sep 13 01:07:45.217048 
kubelet[2075]: I0913 01:07:45.216623 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-etc-cni-netd\") pod \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\" (UID: \"74bb19c9-295d-4ca9-96c4-8268351c5a4d\") " Sep 13 01:07:45.220699 kubelet[2075]: I0913 01:07:45.219642 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.220771 kubelet[2075]: I0913 01:07:45.219154 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.220834 kubelet[2075]: I0913 01:07:45.220824 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.220897 kubelet[2075]: I0913 01:07:45.220888 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.220954 kubelet[2075]: I0913 01:07:45.220944 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.221011 kubelet[2075]: I0913 01:07:45.221002 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.221065 kubelet[2075]: I0913 01:07:45.221057 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.225577 kubelet[2075]: I0913 01:07:45.225553 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74bb19c9-295d-4ca9-96c4-8268351c5a4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:07:45.225714 kubelet[2075]: I0913 01:07:45.225701 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-kube-api-access-zjtfr" (OuterVolumeSpecName: "kube-api-access-zjtfr") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "kube-api-access-zjtfr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:07:45.227549 kubelet[2075]: I0913 01:07:45.227535 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69226fd4-06ac-4fe7-849b-ba866ebbbfe4" (UID: "69226fd4-06ac-4fe7-849b-ba866ebbbfe4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:07:45.229188 kubelet[2075]: I0913 01:07:45.229176 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:07:45.229265 kubelet[2075]: I0913 01:07:45.229255 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.229348 kubelet[2075]: I0913 01:07:45.229338 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.230906 kubelet[2075]: I0913 01:07:45.230887 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:07:45.230949 kubelet[2075]: I0913 01:07:45.230914 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74bb19c9-295d-4ca9-96c4-8268351c5a4d" (UID: "74bb19c9-295d-4ca9-96c4-8268351c5a4d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:45.231550 kubelet[2075]: I0913 01:07:45.231538 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-kube-api-access-hjklt" (OuterVolumeSpecName: "kube-api-access-hjklt") pod "69226fd4-06ac-4fe7-849b-ba866ebbbfe4" (UID: "69226fd4-06ac-4fe7-849b-ba866ebbbfe4"). InnerVolumeSpecName "kube-api-access-hjklt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.317992 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318023 2075 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318030 2075 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318035 2075 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318041 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318046 2075 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318051 2075 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.319791 kubelet[2075]: I0913 01:07:45.318055 2075 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjtfr\" (UniqueName: \"kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-kube-api-access-zjtfr\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318060 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318065 2075 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318070 2075 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74bb19c9-295d-4ca9-96c4-8268351c5a4d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318074 2075 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74bb19c9-295d-4ca9-96c4-8268351c5a4d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318080 2075 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318084 2075 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74bb19c9-295d-4ca9-96c4-8268351c5a4d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318089 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/74bb19c9-295d-4ca9-96c4-8268351c5a4d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.320104 kubelet[2075]: I0913 01:07:45.318093 2075 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hjklt\" (UniqueName: \"kubernetes.io/projected/69226fd4-06ac-4fe7-849b-ba866ebbbfe4-kube-api-access-hjklt\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:45.618704 kubelet[2075]: I0913 01:07:45.618643 2075 scope.go:117] "RemoveContainer" containerID="2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de" Sep 13 01:07:45.621662 systemd[1]: Removed slice kubepods-burstable-pod74bb19c9_295d_4ca9_96c4_8268351c5a4d.slice. Sep 13 01:07:45.621717 systemd[1]: kubepods-burstable-pod74bb19c9_295d_4ca9_96c4_8268351c5a4d.slice: Consumed 4.961s CPU time. Sep 13 01:07:45.637106 systemd[1]: Removed slice kubepods-besteffort-pod69226fd4_06ac_4fe7_849b_ba866ebbbfe4.slice. Sep 13 01:07:45.638356 env[1244]: time="2025-09-13T01:07:45.638331841Z" level=info msg="RemoveContainer for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\"" Sep 13 01:07:45.639710 env[1244]: time="2025-09-13T01:07:45.639692759Z" level=info msg="RemoveContainer for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" returns successfully" Sep 13 01:07:45.639987 kubelet[2075]: I0913 01:07:45.639972 2075 scope.go:117] "RemoveContainer" containerID="2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de" Sep 13 01:07:45.640331 env[1244]: time="2025-09-13T01:07:45.640260745Z" level=error msg="ContainerStatus for \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\": not found" Sep 13 01:07:45.641774 kubelet[2075]: E0913 01:07:45.641757 2075 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\": not found" containerID="2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de" Sep 13 01:07:45.645004 kubelet[2075]: I0913 01:07:45.643193 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de"} err="failed to get container status \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b627ee9a3855d0c2e77844c70249af96efaba26a3ab181ec48c584470a130de\": not found" Sep 13 01:07:45.645076 kubelet[2075]: I0913 01:07:45.645005 2075 scope.go:117] "RemoveContainer" containerID="0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75" Sep 13 01:07:45.646677 env[1244]: time="2025-09-13T01:07:45.646432683Z" level=info msg="RemoveContainer for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\"" Sep 13 01:07:45.647694 env[1244]: time="2025-09-13T01:07:45.647649273Z" level=info msg="RemoveContainer for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" returns successfully" Sep 13 01:07:45.647772 kubelet[2075]: I0913 01:07:45.647756 2075 scope.go:117] "RemoveContainer" containerID="eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80" Sep 13 01:07:45.650158 env[1244]: time="2025-09-13T01:07:45.650135823Z" level=info msg="RemoveContainer for \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\"" Sep 13 01:07:45.652323 env[1244]: time="2025-09-13T01:07:45.652296057Z" level=info msg="RemoveContainer for \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\" returns successfully" Sep 13 01:07:45.652398 kubelet[2075]: I0913 01:07:45.652387 2075 scope.go:117] "RemoveContainer" 
containerID="ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142" Sep 13 01:07:45.653271 env[1244]: time="2025-09-13T01:07:45.653049813Z" level=info msg="RemoveContainer for \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\"" Sep 13 01:07:45.654816 env[1244]: time="2025-09-13T01:07:45.654763423Z" level=info msg="RemoveContainer for \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\" returns successfully" Sep 13 01:07:45.655310 kubelet[2075]: I0913 01:07:45.655299 2075 scope.go:117] "RemoveContainer" containerID="47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1" Sep 13 01:07:45.656899 env[1244]: time="2025-09-13T01:07:45.656397571Z" level=info msg="RemoveContainer for \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\"" Sep 13 01:07:45.658332 env[1244]: time="2025-09-13T01:07:45.658313364Z" level=info msg="RemoveContainer for \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\" returns successfully" Sep 13 01:07:45.658498 kubelet[2075]: I0913 01:07:45.658483 2075 scope.go:117] "RemoveContainer" containerID="b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7" Sep 13 01:07:45.659288 env[1244]: time="2025-09-13T01:07:45.659270074Z" level=info msg="RemoveContainer for \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\"" Sep 13 01:07:45.660324 env[1244]: time="2025-09-13T01:07:45.660307181Z" level=info msg="RemoveContainer for \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\" returns successfully" Sep 13 01:07:45.660449 kubelet[2075]: I0913 01:07:45.660434 2075 scope.go:117] "RemoveContainer" containerID="0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75" Sep 13 01:07:45.660562 env[1244]: time="2025-09-13T01:07:45.660523538Z" level=error msg="ContainerStatus for \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\": not found" Sep 13 01:07:45.660685 kubelet[2075]: E0913 01:07:45.660674 2075 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\": not found" containerID="0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75" Sep 13 01:07:45.660762 kubelet[2075]: I0913 01:07:45.660747 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75"} err="failed to get container status \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\": rpc error: code = NotFound desc = an error occurred when try to find container \"0eb742700f72fe40eea8b059134e703bd39dd054429a05dd166e4232b8750e75\": not found" Sep 13 01:07:45.660821 kubelet[2075]: I0913 01:07:45.660813 2075 scope.go:117] "RemoveContainer" containerID="eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80" Sep 13 01:07:45.661026 env[1244]: time="2025-09-13T01:07:45.660965057Z" level=error msg="ContainerStatus for \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\": not found" Sep 13 01:07:45.661153 kubelet[2075]: E0913 01:07:45.661133 2075 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\": not found" containerID="eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80" Sep 13 01:07:45.661207 kubelet[2075]: I0913 01:07:45.661195 2075 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80"} err="failed to get container status \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb9f07d6425112b0fdc6cb3e5c6068cb1237e2be730343831192eeb22b80fb80\": not found" Sep 13 01:07:45.661264 kubelet[2075]: I0913 01:07:45.661256 2075 scope.go:117] "RemoveContainer" containerID="ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142" Sep 13 01:07:45.661428 env[1244]: time="2025-09-13T01:07:45.661389992Z" level=error msg="ContainerStatus for \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\": not found" Sep 13 01:07:45.661537 kubelet[2075]: E0913 01:07:45.661521 2075 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\": not found" containerID="ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142" Sep 13 01:07:45.661579 kubelet[2075]: I0913 01:07:45.661538 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142"} err="failed to get container status \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddf82f03195851e64d782b243767f0279ac51ac1b50c9866b8e8600aa0ce8142\": not found" Sep 13 01:07:45.661579 kubelet[2075]: I0913 01:07:45.661547 2075 scope.go:117] "RemoveContainer" containerID="47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1" Sep 13 01:07:45.661749 env[1244]: 
time="2025-09-13T01:07:45.661693185Z" level=error msg="ContainerStatus for \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\": not found" Sep 13 01:07:45.661836 kubelet[2075]: E0913 01:07:45.661827 2075 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\": not found" containerID="47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1" Sep 13 01:07:45.661901 kubelet[2075]: I0913 01:07:45.661890 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1"} err="failed to get container status \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"47296e40826bd2524c563442197b38309470284bbf31b75e75716393eb10eed1\": not found" Sep 13 01:07:45.661949 kubelet[2075]: I0913 01:07:45.661941 2075 scope.go:117] "RemoveContainer" containerID="b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7" Sep 13 01:07:45.662125 env[1244]: time="2025-09-13T01:07:45.662097328Z" level=error msg="ContainerStatus for \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\": not found" Sep 13 01:07:45.662218 kubelet[2075]: E0913 01:07:45.662207 2075 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\": not found" 
containerID="b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7" Sep 13 01:07:45.662284 kubelet[2075]: I0913 01:07:45.662273 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7"} err="failed to get container status \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2e669bbf1f0ecf7fab316e14e3407966ec944824d095a0f7db0bdf83ea443d7\": not found" Sep 13 01:07:45.729897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266-rootfs.mount: Deactivated successfully. Sep 13 01:07:45.729959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7407dfe13184b5858c52276885bcf57e825256acd6aa55e51f52f3c68c379266-shm.mount: Deactivated successfully. Sep 13 01:07:45.730001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9-rootfs.mount: Deactivated successfully. Sep 13 01:07:45.730044 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3983bfddccb2b71ee966301b2975f370519d4f14ca456ea1a3996203aecefbb9-shm.mount: Deactivated successfully. Sep 13 01:07:45.730079 systemd[1]: var-lib-kubelet-pods-74bb19c9\x2d295d\x2d4ca9\x2d96c4\x2d8268351c5a4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjtfr.mount: Deactivated successfully. Sep 13 01:07:45.730116 systemd[1]: var-lib-kubelet-pods-69226fd4\x2d06ac\x2d4fe7\x2d849b\x2dba866ebbbfe4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjklt.mount: Deactivated successfully. Sep 13 01:07:45.730152 systemd[1]: var-lib-kubelet-pods-74bb19c9\x2d295d\x2d4ca9\x2d96c4\x2d8268351c5a4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 01:07:45.730186 systemd[1]: var-lib-kubelet-pods-74bb19c9\x2d295d\x2d4ca9\x2d96c4\x2d8268351c5a4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:07:46.249290 kubelet[2075]: I0913 01:07:46.249266 2075 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69226fd4-06ac-4fe7-849b-ba866ebbbfe4" path="/var/lib/kubelet/pods/69226fd4-06ac-4fe7-849b-ba866ebbbfe4/volumes" Sep 13 01:07:46.263721 kubelet[2075]: I0913 01:07:46.263702 2075 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74bb19c9-295d-4ca9-96c4-8268351c5a4d" path="/var/lib/kubelet/pods/74bb19c9-295d-4ca9-96c4-8268351c5a4d/volumes" Sep 13 01:07:46.595248 systemd[1]: Started sshd@21-139.178.70.102:22-147.75.109.163:40602.service. Sep 13 01:07:46.597181 sshd[3630]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:46.608045 systemd[1]: sshd@20-139.178.70.102:22-147.75.109.163:40588.service: Deactivated successfully. Sep 13 01:07:46.608582 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 01:07:46.609026 systemd-logind[1235]: Session 23 logged out. Waiting for processes to exit. Sep 13 01:07:46.609640 systemd-logind[1235]: Removed session 23. Sep 13 01:07:46.687224 sshd[3787]: Accepted publickey for core from 147.75.109.163 port 40602 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:46.688158 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:46.691591 systemd[1]: Started session-24.scope. Sep 13 01:07:46.691823 systemd-logind[1235]: New session 24 of user core. Sep 13 01:07:47.153808 sshd[3787]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:47.156737 systemd[1]: Started sshd@22-139.178.70.102:22-147.75.109.163:40618.service. Sep 13 01:07:47.158440 systemd[1]: sshd@21-139.178.70.102:22-147.75.109.163:40602.service: Deactivated successfully. 
Sep 13 01:07:47.158858 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 01:07:47.159191 systemd-logind[1235]: Session 24 logged out. Waiting for processes to exit. Sep 13 01:07:47.159703 systemd-logind[1235]: Removed session 24. Sep 13 01:07:47.189111 systemd[1]: Created slice kubepods-burstable-pod0d64aea0_a008_43e3_912c_725766421c88.slice. Sep 13 01:07:47.193186 sshd[3799]: Accepted publickey for core from 147.75.109.163 port 40618 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:47.194278 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:47.200604 systemd[1]: Started session-25.scope. Sep 13 01:07:47.200714 systemd-logind[1235]: New session 25 of user core. Sep 13 01:07:47.314628 kubelet[2075]: E0913 01:07:47.314591 2075 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:07:47.339220 kubelet[2075]: I0913 01:07:47.339197 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-cgroup\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339358 kubelet[2075]: I0913 01:07:47.339347 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-hostproc\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339456 kubelet[2075]: I0913 01:07:47.339446 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0d64aea0-a008-43e3-912c-725766421c88-cilium-config-path\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339514 kubelet[2075]: I0913 01:07:47.339505 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-cilium-ipsec-secrets\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339573 kubelet[2075]: I0913 01:07:47.339563 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-kernel\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339628 kubelet[2075]: I0913 01:07:47.339619 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-lib-modules\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339687 kubelet[2075]: I0913 01:07:47.339673 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-bpf-maps\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339737 kubelet[2075]: I0913 01:07:47.339728 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-xtables-lock\") pod \"cilium-d28fx\" (UID: 
\"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339791 kubelet[2075]: I0913 01:07:47.339780 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-clustermesh-secrets\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339850 kubelet[2075]: I0913 01:07:47.339841 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-net\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339910 kubelet[2075]: I0913 01:07:47.339901 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjd9x\" (UniqueName: \"kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-kube-api-access-rjd9x\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.339978 kubelet[2075]: I0913 01:07:47.339968 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-run\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.340034 kubelet[2075]: I0913 01:07:47.340025 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cni-path\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.340087 kubelet[2075]: 
I0913 01:07:47.340078 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-etc-cni-netd\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.340146 kubelet[2075]: I0913 01:07:47.340137 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-hubble-tls\") pod \"cilium-d28fx\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " pod="kube-system/cilium-d28fx" Sep 13 01:07:47.425901 sshd[3799]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:47.432585 systemd[1]: Started sshd@23-139.178.70.102:22-147.75.109.163:40620.service. Sep 13 01:07:47.432963 systemd[1]: sshd@22-139.178.70.102:22-147.75.109.163:40618.service: Deactivated successfully. Sep 13 01:07:47.433450 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 01:07:47.434897 systemd-logind[1235]: Session 25 logged out. Waiting for processes to exit. Sep 13 01:07:47.437182 systemd-logind[1235]: Removed session 25. 
Sep 13 01:07:47.437403 kubelet[2075]: E0913 01:07:47.437373 2075 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rjd9x lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-d28fx" podUID="0d64aea0-a008-43e3-912c-725766421c88" Sep 13 01:07:47.476564 sshd[3810]: Accepted publickey for core from 147.75.109.163 port 40620 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:47.477519 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:47.480713 systemd[1]: Started session-26.scope. Sep 13 01:07:47.480817 systemd-logind[1235]: New session 26 of user core. Sep 13 01:07:47.742796 kubelet[2075]: I0913 01:07:47.742771 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-kernel\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.742939 kubelet[2075]: I0913 01:07:47.742924 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-run\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743037 kubelet[2075]: I0913 01:07:47.743024 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-cilium-ipsec-secrets\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743108 
kubelet[2075]: I0913 01:07:47.743096 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-net\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743186 kubelet[2075]: I0913 01:07:47.743174 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-hostproc\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743253 kubelet[2075]: I0913 01:07:47.743242 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-lib-modules\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743371 kubelet[2075]: I0913 01:07:47.743359 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-etc-cni-netd\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743460 kubelet[2075]: I0913 01:07:47.743448 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d64aea0-a008-43e3-912c-725766421c88-cilium-config-path\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743730 kubelet[2075]: I0913 01:07:47.743528 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-clustermesh-secrets\") pod 
\"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743730 kubelet[2075]: I0913 01:07:47.743545 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-bpf-maps\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743730 kubelet[2075]: I0913 01:07:47.743556 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cni-path\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743730 kubelet[2075]: I0913 01:07:47.743569 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-hubble-tls\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743730 kubelet[2075]: I0913 01:07:47.743579 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-cgroup\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743730 kubelet[2075]: I0913 01:07:47.743590 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-xtables-lock\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.743905 kubelet[2075]: I0913 01:07:47.743604 2075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjd9x\" (UniqueName: 
\"kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-kube-api-access-rjd9x\") pod \"0d64aea0-a008-43e3-912c-725766421c88\" (UID: \"0d64aea0-a008-43e3-912c-725766421c88\") " Sep 13 01:07:47.746571 systemd[1]: var-lib-kubelet-pods-0d64aea0\x2da008\x2d43e3\x2d912c\x2d725766421c88-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 01:07:47.748378 kubelet[2075]: I0913 01:07:47.742868 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748428 kubelet[2075]: I0913 01:07:47.742989 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748428 kubelet[2075]: I0913 01:07:47.746873 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:07:47.748428 kubelet[2075]: I0913 01:07:47.748360 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d64aea0-a008-43e3-912c-725766421c88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:07:47.748428 kubelet[2075]: I0913 01:07:47.748396 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748428 kubelet[2075]: I0913 01:07:47.748406 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748569 kubelet[2075]: I0913 01:07:47.748432 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748569 kubelet[2075]: I0913 01:07:47.748442 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748569 kubelet[2075]: I0913 01:07:47.748453 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.748752 kubelet[2075]: I0913 01:07:47.748739 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-kube-api-access-rjd9x" (OuterVolumeSpecName: "kube-api-access-rjd9x") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "kube-api-access-rjd9x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:07:47.749841 kubelet[2075]: I0913 01:07:47.749826 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:07:47.749885 kubelet[2075]: I0913 01:07:47.749845 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.749885 kubelet[2075]: I0913 01:07:47.749855 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.749885 kubelet[2075]: I0913 01:07:47.749864 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:07:47.750269 kubelet[2075]: I0913 01:07:47.750257 2075 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d64aea0-a008-43e3-912c-725766421c88" (UID: "0d64aea0-a008-43e3-912c-725766421c88"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:07:47.843859 kubelet[2075]: I0913 01:07:47.843829 2075 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844008 kubelet[2075]: I0913 01:07:47.843996 2075 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844077 kubelet[2075]: I0913 01:07:47.844064 2075 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844144 kubelet[2075]: I0913 01:07:47.844132 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844210 kubelet[2075]: I0913 01:07:47.844200 2075 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844273 kubelet[2075]: I0913 01:07:47.844261 2075 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjd9x\" (UniqueName: \"kubernetes.io/projected/0d64aea0-a008-43e3-912c-725766421c88-kube-api-access-rjd9x\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844340 kubelet[2075]: I0913 01:07:47.844330 2075 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844400 kubelet[2075]: I0913 
01:07:47.844390 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844489 kubelet[2075]: I0913 01:07:47.844478 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844551 kubelet[2075]: I0913 01:07:47.844540 2075 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844615 kubelet[2075]: I0913 01:07:47.844605 2075 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844681 kubelet[2075]: I0913 01:07:47.844670 2075 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844751 kubelet[2075]: I0913 01:07:47.844741 2075 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d64aea0-a008-43e3-912c-725766421c88-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844817 kubelet[2075]: I0913 01:07:47.844806 2075 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d64aea0-a008-43e3-912c-725766421c88-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:47.844883 kubelet[2075]: I0913 01:07:47.844872 2075 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/0d64aea0-a008-43e3-912c-725766421c88-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 01:07:48.251552 systemd[1]: Removed slice kubepods-burstable-pod0d64aea0_a008_43e3_912c_725766421c88.slice. Sep 13 01:07:48.446150 systemd[1]: var-lib-kubelet-pods-0d64aea0\x2da008\x2d43e3\x2d912c\x2d725766421c88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:07:48.446247 systemd[1]: var-lib-kubelet-pods-0d64aea0\x2da008\x2d43e3\x2d912c\x2d725766421c88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjd9x.mount: Deactivated successfully. Sep 13 01:07:48.446311 systemd[1]: var-lib-kubelet-pods-0d64aea0\x2da008\x2d43e3\x2d912c\x2d725766421c88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:07:48.645597 systemd[1]: Created slice kubepods-burstable-pod90b2f4ed_9447_4fde_b433_bb6d6010432a.slice. Sep 13 01:07:48.749804 kubelet[2075]: I0913 01:07:48.749775 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-cilium-run\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750152 kubelet[2075]: I0913 01:07:48.750138 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-etc-cni-netd\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750238 kubelet[2075]: I0913 01:07:48.750224 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-host-proc-sys-kernel\") pod \"cilium-f22zn\" (UID: 
\"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750323 kubelet[2075]: I0913 01:07:48.750312 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-cilium-cgroup\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750402 kubelet[2075]: I0913 01:07:48.750391 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-xtables-lock\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750511 kubelet[2075]: I0913 01:07:48.750499 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90b2f4ed-9447-4fde-b433-bb6d6010432a-cilium-ipsec-secrets\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750598 kubelet[2075]: I0913 01:07:48.750588 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-host-proc-sys-net\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750684 kubelet[2075]: I0913 01:07:48.750672 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-bpf-maps\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750768 kubelet[2075]: I0913 
01:07:48.750758 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90b2f4ed-9447-4fde-b433-bb6d6010432a-hubble-tls\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750863 kubelet[2075]: I0913 01:07:48.750850 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v92n\" (UniqueName: \"kubernetes.io/projected/90b2f4ed-9447-4fde-b433-bb6d6010432a-kube-api-access-4v92n\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.750956 kubelet[2075]: I0913 01:07:48.750943 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90b2f4ed-9447-4fde-b433-bb6d6010432a-clustermesh-secrets\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.751052 kubelet[2075]: I0913 01:07:48.751037 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-hostproc\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.751136 kubelet[2075]: I0913 01:07:48.751125 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-cni-path\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.751220 kubelet[2075]: I0913 01:07:48.751207 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/90b2f4ed-9447-4fde-b433-bb6d6010432a-lib-modules\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.751301 kubelet[2075]: I0913 01:07:48.751289 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90b2f4ed-9447-4fde-b433-bb6d6010432a-cilium-config-path\") pod \"cilium-f22zn\" (UID: \"90b2f4ed-9447-4fde-b433-bb6d6010432a\") " pod="kube-system/cilium-f22zn" Sep 13 01:07:48.947744 env[1244]: time="2025-09-13T01:07:48.947673071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f22zn,Uid:90b2f4ed-9447-4fde-b433-bb6d6010432a,Namespace:kube-system,Attempt:0,}" Sep 13 01:07:48.971648 env[1244]: time="2025-09-13T01:07:48.971592766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:07:48.971861 env[1244]: time="2025-09-13T01:07:48.971835991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:07:48.971930 env[1244]: time="2025-09-13T01:07:48.971863289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:07:48.972039 env[1244]: time="2025-09-13T01:07:48.971996946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd pid=3838 runtime=io.containerd.runc.v2 Sep 13 01:07:48.981394 systemd[1]: Started cri-containerd-c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd.scope. 
Sep 13 01:07:49.006140 env[1244]: time="2025-09-13T01:07:49.006110468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f22zn,Uid:90b2f4ed-9447-4fde-b433-bb6d6010432a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\"" Sep 13 01:07:49.010027 env[1244]: time="2025-09-13T01:07:49.009887448Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:07:49.013941 env[1244]: time="2025-09-13T01:07:49.013915357Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04\"" Sep 13 01:07:49.015042 env[1244]: time="2025-09-13T01:07:49.014273440Z" level=info msg="StartContainer for \"d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04\"" Sep 13 01:07:49.023677 systemd[1]: Started cri-containerd-d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04.scope. Sep 13 01:07:49.044797 env[1244]: time="2025-09-13T01:07:49.044766196Z" level=info msg="StartContainer for \"d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04\" returns successfully" Sep 13 01:07:49.063723 systemd[1]: cri-containerd-d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04.scope: Deactivated successfully. 
Sep 13 01:07:49.080025 env[1244]: time="2025-09-13T01:07:49.079992670Z" level=info msg="shim disconnected" id=d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04 Sep 13 01:07:49.080025 env[1244]: time="2025-09-13T01:07:49.080021772Z" level=warning msg="cleaning up after shim disconnected" id=d7c1a33e5e6340633abec35b8f74050354789677e0b7d3a29403291b5d5dcd04 namespace=k8s.io Sep 13 01:07:49.080025 env[1244]: time="2025-09-13T01:07:49.080027948Z" level=info msg="cleaning up dead shim" Sep 13 01:07:49.085160 env[1244]: time="2025-09-13T01:07:49.085117727Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3924 runtime=io.containerd.runc.v2\n" Sep 13 01:07:49.642714 env[1244]: time="2025-09-13T01:07:49.642666882Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:07:49.654357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165193016.mount: Deactivated successfully. Sep 13 01:07:49.666983 env[1244]: time="2025-09-13T01:07:49.666955031Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284\"" Sep 13 01:07:49.667598 env[1244]: time="2025-09-13T01:07:49.667579164Z" level=info msg="StartContainer for \"35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284\"" Sep 13 01:07:49.682195 systemd[1]: Started cri-containerd-35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284.scope. 
Sep 13 01:07:49.700312 env[1244]: time="2025-09-13T01:07:49.700284335Z" level=info msg="StartContainer for \"35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284\" returns successfully"
Sep 13 01:07:49.717645 systemd[1]: cri-containerd-35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284.scope: Deactivated successfully.
Sep 13 01:07:49.744749 env[1244]: time="2025-09-13T01:07:49.744711660Z" level=info msg="shim disconnected" id=35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284
Sep 13 01:07:49.744749 env[1244]: time="2025-09-13T01:07:49.744746317Z" level=warning msg="cleaning up after shim disconnected" id=35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284 namespace=k8s.io
Sep 13 01:07:49.744749 env[1244]: time="2025-09-13T01:07:49.744754957Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:49.750508 env[1244]: time="2025-09-13T01:07:49.750479038Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3987 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:50.248729 kubelet[2075]: E0913 01:07:50.248105    2075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qnvpn" podUID="17dc8b0e-e8ab-4cf0-898e-79e06e4228db"
Sep 13 01:07:50.249894 kubelet[2075]: I0913 01:07:50.249869    2075 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d64aea0-a008-43e3-912c-725766421c88" path="/var/lib/kubelet/pods/0d64aea0-a008-43e3-912c-725766421c88/volumes"
Sep 13 01:07:50.446154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35b646e8a6e52bd5ed32c0ada3a5c5cac8346ff338c09bb32416267f644c3284-rootfs.mount: Deactivated successfully.
Sep 13 01:07:50.636006 env[1244]: time="2025-09-13T01:07:50.635936093Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:07:50.646172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099699547.mount: Deactivated successfully.
Sep 13 01:07:50.654104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267316397.mount: Deactivated successfully.
Sep 13 01:07:50.655803 env[1244]: time="2025-09-13T01:07:50.655777716Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853\""
Sep 13 01:07:50.656251 env[1244]: time="2025-09-13T01:07:50.656224890Z" level=info msg="StartContainer for \"be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853\""
Sep 13 01:07:50.666259 systemd[1]: Started cri-containerd-be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853.scope.
Sep 13 01:07:50.685141 env[1244]: time="2025-09-13T01:07:50.684993594Z" level=info msg="StartContainer for \"be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853\" returns successfully"
Sep 13 01:07:50.692925 systemd[1]: cri-containerd-be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853.scope: Deactivated successfully.
Sep 13 01:07:50.711603 env[1244]: time="2025-09-13T01:07:50.711564408Z" level=info msg="shim disconnected" id=be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853
Sep 13 01:07:50.711603 env[1244]: time="2025-09-13T01:07:50.711601595Z" level=warning msg="cleaning up after shim disconnected" id=be5dc59177a5856f9c9d0d53d2ccb85964c8220f79052c02d98300bbc7238853 namespace=k8s.io
Sep 13 01:07:50.711738 env[1244]: time="2025-09-13T01:07:50.711607788Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:50.716353 env[1244]: time="2025-09-13T01:07:50.716334166Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4045 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:51.636808 env[1244]: time="2025-09-13T01:07:51.636780080Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:07:51.644978 env[1244]: time="2025-09-13T01:07:51.644950378Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484\""
Sep 13 01:07:51.646373 env[1244]: time="2025-09-13T01:07:51.646292458Z" level=info msg="StartContainer for \"7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484\""
Sep 13 01:07:51.667155 systemd[1]: Started cri-containerd-7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484.scope.
Sep 13 01:07:51.683025 env[1244]: time="2025-09-13T01:07:51.683001831Z" level=info msg="StartContainer for \"7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484\" returns successfully"
Sep 13 01:07:51.684966 systemd[1]: cri-containerd-7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484.scope: Deactivated successfully.
Sep 13 01:07:51.696567 env[1244]: time="2025-09-13T01:07:51.696535540Z" level=info msg="shim disconnected" id=7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484
Sep 13 01:07:51.696567 env[1244]: time="2025-09-13T01:07:51.696567286Z" level=warning msg="cleaning up after shim disconnected" id=7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484 namespace=k8s.io
Sep 13 01:07:51.696703 env[1244]: time="2025-09-13T01:07:51.696573534Z" level=info msg="cleaning up dead shim"
Sep 13 01:07:51.701021 env[1244]: time="2025-09-13T01:07:51.701002932Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:07:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4102 runtime=io.containerd.runc.v2\n"
Sep 13 01:07:52.248593 kubelet[2075]: E0913 01:07:52.248559    2075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qnvpn" podUID="17dc8b0e-e8ab-4cf0-898e-79e06e4228db"
Sep 13 01:07:52.315939 kubelet[2075]: E0913 01:07:52.315909    2075 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 01:07:52.446251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7899aa38b345a5013b01588b38f2343ee0e8542a1eb46c4553c247809312f484-rootfs.mount: Deactivated successfully.
Sep 13 01:07:52.646958 env[1244]: time="2025-09-13T01:07:52.646799800Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:07:52.708298 env[1244]: time="2025-09-13T01:07:52.708244417Z" level=info msg="CreateContainer within sandbox \"c826ee09381d514cf38f88458eef3ae77135546a52d8e46dd0957d72f14256dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0\""
Sep 13 01:07:52.709113 env[1244]: time="2025-09-13T01:07:52.709094072Z" level=info msg="StartContainer for \"fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0\""
Sep 13 01:07:52.728665 systemd[1]: Started cri-containerd-fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0.scope.
Sep 13 01:07:52.749875 env[1244]: time="2025-09-13T01:07:52.749844684Z" level=info msg="StartContainer for \"fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0\" returns successfully"
Sep 13 01:07:53.309431 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 01:07:53.445991 systemd[1]: run-containerd-runc-k8s.io-fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0-runc.F0ctRG.mount: Deactivated successfully.
Sep 13 01:07:54.247786 kubelet[2075]: E0913 01:07:54.247748    2075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qnvpn" podUID="17dc8b0e-e8ab-4cf0-898e-79e06e4228db"
Sep 13 01:07:54.839028 kubelet[2075]: I0913 01:07:54.838993    2075 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T01:07:54Z","lastTransitionTime":"2025-09-13T01:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 01:07:55.764431 systemd[1]: run-containerd-runc-k8s.io-fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0-runc.S7UhXS.mount: Deactivated successfully.
Sep 13 01:07:55.765929 systemd-networkd[1060]: lxc_health: Link UP
Sep 13 01:07:55.818479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:07:55.818561 systemd-networkd[1060]: lxc_health: Gained carrier
Sep 13 01:07:56.248798 kubelet[2075]: E0913 01:07:56.248760    2075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qnvpn" podUID="17dc8b0e-e8ab-4cf0-898e-79e06e4228db"
Sep 13 01:07:56.958561 kubelet[2075]: I0913 01:07:56.958523    2075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f22zn" podStartSLOduration=8.958511253 podStartE2EDuration="8.958511253s" podCreationTimestamp="2025-09-13 01:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:07:53.653508099 +0000 UTC m=+131.698983239" watchObservedRunningTime="2025-09-13 01:07:56.958511253 +0000 UTC m=+135.003986394"
Sep 13 01:07:57.659600 systemd-networkd[1060]: lxc_health: Gained IPv6LL
Sep 13 01:08:00.128092 systemd[1]: run-containerd-runc-k8s.io-fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0-runc.y6d0u1.mount: Deactivated successfully.
Sep 13 01:08:02.195444 systemd[1]: run-containerd-runc-k8s.io-fefaff61098461c6257dd9d1a97ac63c4c5c8227adde1cf60f0456feaf91acf0-runc.V5VSCb.mount: Deactivated successfully.
Sep 13 01:08:02.236368 sshd[3810]: pam_unix(sshd:session): session closed for user core
Sep 13 01:08:02.245026 systemd[1]: sshd@23-139.178.70.102:22-147.75.109.163:40620.service: Deactivated successfully.
Sep 13 01:08:02.245477 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:08:02.246028 systemd-logind[1235]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:08:02.246553 systemd-logind[1235]: Removed session 26.