Jul 10 00:53:27.654316 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed Jul 9 23:09:45 -00 2025
Jul 10 00:53:27.654330 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:53:27.654336 kernel: Disabled fast string operations
Jul 10 00:53:27.654340 kernel: BIOS-provided physical RAM map:
Jul 10 00:53:27.654344 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 10 00:53:27.654348 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 10 00:53:27.654354 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 10 00:53:27.654358 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 10 00:53:27.654362 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 10 00:53:27.654366 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 10 00:53:27.654370 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 10 00:53:27.654374 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 10 00:53:27.654378 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 10 00:53:27.654382 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 10 00:53:27.654389 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 10 00:53:27.654393 kernel: NX (Execute Disable) protection: active
Jul 10 00:53:27.654398 kernel: SMBIOS 2.7 present.
Jul 10 00:53:27.654402 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 10 00:53:27.654407 kernel: vmware: hypercall mode: 0x00
Jul 10 00:53:27.654411 kernel: Hypervisor detected: VMware
Jul 10 00:53:27.654417 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 10 00:53:27.654421 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 10 00:53:27.654425 kernel: vmware: using clock offset of 4613482324 ns
Jul 10 00:53:27.654430 kernel: tsc: Detected 3408.000 MHz processor
Jul 10 00:53:27.654435 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:53:27.654440 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:53:27.654444 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 10 00:53:27.654449 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:53:27.654453 kernel: total RAM covered: 3072M
Jul 10 00:53:27.654459 kernel: Found optimal setting for mtrr clean up
Jul 10 00:53:27.654464 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 10 00:53:27.654468 kernel: Using GB pages for direct mapping
Jul 10 00:53:27.654473 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:53:27.654488 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 10 00:53:27.654494 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 10 00:53:27.654498 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 10 00:53:27.654503 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 10 00:53:27.654507 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 10 00:53:27.654512 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 10 00:53:27.654518 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 10 00:53:27.654525 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 10 00:53:27.654530 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 10 00:53:27.654535 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 10 00:53:27.654540 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 10 00:53:27.654546 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 10 00:53:27.654551 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 10 00:53:27.654556 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 10 00:53:27.654561 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 10 00:53:27.654566 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 10 00:53:27.654570 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 10 00:53:27.654575 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 10 00:53:27.654580 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 10 00:53:27.654585 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 10 00:53:27.654591 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 10 00:53:27.654596 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 10 00:53:27.654601 kernel: system APIC only can use physical flat
Jul 10 00:53:27.654605 kernel: Setting APIC routing to physical flat.
Jul 10 00:53:27.654610 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 10 00:53:27.654615 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 10 00:53:27.654620 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 10 00:53:27.654625 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 10 00:53:27.654630 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 10 00:53:27.654639 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 10 00:53:27.654644 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 10 00:53:27.654649 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 10 00:53:27.654654 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jul 10 00:53:27.654658 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jul 10 00:53:27.654663 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jul 10 00:53:27.654668 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jul 10 00:53:27.654672 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jul 10 00:53:27.654677 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jul 10 00:53:27.654682 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jul 10 00:53:27.654688 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jul 10 00:53:27.654692 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jul 10 00:53:27.654697 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jul 10 00:53:27.654702 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jul 10 00:53:27.654707 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jul 10 00:53:27.654711 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jul 10 00:53:27.654716 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jul 10 00:53:27.654721 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jul 10 00:53:27.654726 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jul 10 00:53:27.654731 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jul 10 00:53:27.654736 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jul 10 00:53:27.654741 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jul 10 00:53:27.654746 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jul 10 00:53:27.654751 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jul 10 00:53:27.654755 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jul 10 00:53:27.654760 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jul 10 00:53:27.654765 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jul 10 00:53:27.654770 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jul 10 00:53:27.654775 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jul 10 00:53:27.654779 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jul 10 00:53:27.654785 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jul 10 00:53:27.654790 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jul 10 00:53:27.654795 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jul 10 00:53:27.654800 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jul 10 00:53:27.654804 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jul 10 00:53:27.654809 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jul 10 00:53:27.654814 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jul 10 00:53:27.654819 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jul 10 00:53:27.654824 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jul 10 00:53:27.654829 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jul 10 00:53:27.654835 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jul 10 00:53:27.654839 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jul 10 00:53:27.654844 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jul 10 00:53:27.654849 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jul 10 00:53:27.654854 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jul 10 00:53:27.654858 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jul 10 00:53:27.654863 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jul 10 00:53:27.654868 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jul 10 00:53:27.654873 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jul 10 00:53:27.654878 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 10 00:53:27.654884 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 10 00:53:27.654889 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 10 00:53:27.654893 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 10 00:53:27.654898 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 10 00:53:27.654903 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 10 00:53:27.654908 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 10 00:53:27.654917 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 10 00:53:27.654922 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 10 00:53:27.654927 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 10 00:53:27.654932 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 10 00:53:27.654938 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 10 00:53:27.654944 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 10 00:53:27.654949 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 10 00:53:27.654954 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 10 00:53:27.654959 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 10 00:53:27.654964 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 10 00:53:27.654969 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 10 00:53:27.654974 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 10 00:53:27.654980 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 10 00:53:27.654985 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 10 00:53:27.654991 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 10 00:53:27.654996 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 10 00:53:27.655001 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 10 00:53:27.655006 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 10 00:53:27.655011 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 10 00:53:27.655016 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 10 00:53:27.655021 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 10 00:53:27.655027 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 10 00:53:27.655033 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 10 00:53:27.655038 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 10 00:53:27.655043 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 10 00:53:27.655048 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 10 00:53:27.655053 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 10 00:53:27.655058 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 10 00:53:27.655063 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 10 00:53:27.655068 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 10 00:53:27.655073 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 10 00:53:27.655079 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 10 00:53:27.655084 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 10 00:53:27.655090 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 10 00:53:27.655095 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 10 00:53:27.655100 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 10 00:53:27.655105 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 10 00:53:27.655110 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 10 00:53:27.655115 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 10 00:53:27.655120 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 10 00:53:27.655126 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 10 00:53:27.655132 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 10 00:53:27.655137 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 10 00:53:27.655142 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 10 00:53:27.655147 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 10 00:53:27.655152 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 10 00:53:27.655157 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 10 00:53:27.655162 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 10 00:53:27.655167 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 10 00:53:27.655173 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 10 00:53:27.655178 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 10 00:53:27.655184 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 10 00:53:27.655189 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 10 00:53:27.655194 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 10 00:53:27.655199 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 10 00:53:27.655204 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 10 00:53:27.655209 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 10 00:53:27.655214 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 10 00:53:27.655219 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 10 00:53:27.655224 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 10 00:53:27.655229 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 10 00:53:27.655235 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 10 00:53:27.655241 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 10 00:53:27.655246 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 10 00:53:27.655251 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 10 00:53:27.655256 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 10 00:53:27.655261 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 10 00:53:27.655266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 10 00:53:27.655272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 10 00:53:27.655277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 10 00:53:27.655282 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 10 00:53:27.655289 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 10 00:53:27.655294 kernel: Zone ranges:
Jul 10 00:53:27.655300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:53:27.655305 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 10 00:53:27.655310 kernel: Normal empty
Jul 10 00:53:27.655316 kernel: Movable zone start for each node
Jul 10 00:53:27.655321 kernel: Early memory node ranges
Jul 10 00:53:27.655326 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 10 00:53:27.655331 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 10 00:53:27.655338 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 10 00:53:27.655343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 10 00:53:27.655348 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:53:27.655353 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 10 00:53:27.655359 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 10 00:53:27.655364 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 10 00:53:27.655369 kernel: system APIC only can use physical flat
Jul 10 00:53:27.655374 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 10 00:53:27.655380 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 10 00:53:27.655386 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 10 00:53:27.655391 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 10 00:53:27.655396 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 10 00:53:27.655402 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 10 00:53:27.655407 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 10 00:53:27.655412 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 10 00:53:27.655417 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 10 00:53:27.655423 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 10 00:53:27.655428 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 10 00:53:27.655433 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 10 00:53:27.655439 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 10 00:53:27.655444 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 10 00:53:27.655449 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 10 00:53:27.655455 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 10 00:53:27.655460 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 10 00:53:27.655465 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 10 00:53:27.655471 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 10 00:53:27.655484 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 10 00:53:27.655489 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 10 00:53:27.655496 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 10 00:53:27.655502 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 10 00:53:27.655507 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 10 00:53:27.655512 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 10 00:53:27.655517 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 10 00:53:27.655522 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 10 00:53:27.655527 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 10 00:53:27.655533 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 10 00:53:27.655538 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 10 00:53:27.655543 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 10 00:53:27.655549 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 10 00:53:27.655554 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 10 00:53:27.655560 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 10 00:53:27.655565 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 10 00:53:27.655570 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 10 00:53:27.655575 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 10 00:53:27.655580 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 10 00:53:27.655585 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 10 00:53:27.655590 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 10 00:53:27.655596 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 10 00:53:27.655602 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 10 00:53:27.655607 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 10 00:53:27.655612 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 10 00:53:27.655617 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 10 00:53:27.655622 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 10 00:53:27.655628 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 10 00:53:27.655633 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 10 00:53:27.655638 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 10 00:53:27.655643 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 10 00:53:27.655649 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 10 00:53:27.655654 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 10 00:53:27.655659 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 10 00:53:27.655664 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 10 00:53:27.655670 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 10 00:53:27.655675 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 10 00:53:27.655680 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 10 00:53:27.655685 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 10 00:53:27.655691 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 10 00:53:27.655697 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 10 00:53:27.655702 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 10 00:53:27.655707 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 10 00:53:27.655712 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 10 00:53:27.655717 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 10 00:53:27.655722 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 10 00:53:27.655728 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 10 00:53:27.655733 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 10 00:53:27.655738 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 10 00:53:27.655743 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 10 00:53:27.655750 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 10 00:53:27.655755 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 10 00:53:27.655760 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 10 00:53:27.655765 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 10 00:53:27.655771 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 10 00:53:27.655776 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 10 00:53:27.655781 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 10 00:53:27.655786 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 10 00:53:27.655791 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 10 00:53:27.655797 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 10 00:53:27.655802 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 10 00:53:27.655808 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 10 00:53:27.655813 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 10 00:53:27.655818 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 10 00:53:27.655823 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 10 00:53:27.655828 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 10 00:53:27.655833 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 10 00:53:27.655839 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 10 00:53:27.655845 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 10 00:53:27.655850 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 10 00:53:27.655855 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 10 00:53:27.655860 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 10 00:53:27.655865 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 10 00:53:27.655870 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 10 00:53:27.655875 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 10 00:53:27.655881 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 10 00:53:27.655886 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 10 00:53:27.655891 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 10 00:53:27.655897 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 10 00:53:27.655902 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 10 00:53:27.655908 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 10 00:53:27.655913 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 10 00:53:27.655918 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 10 00:53:27.655923 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 10 00:53:27.655928 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 10 00:53:27.655934 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 10 00:53:27.655939 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 10 00:53:27.655944 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 10 00:53:27.655950 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 10 00:53:27.655955 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 10 00:53:27.655960 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 10 00:53:27.655966 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 10 00:53:27.655971 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 10 00:53:27.655976 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 10 00:53:27.655981 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 10 00:53:27.655986 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 10 00:53:27.655992 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 10 00:53:27.655998 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 10 00:53:27.656003 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 10 00:53:27.656008 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 10 00:53:27.656014 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 10 00:53:27.656019 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 10 00:53:27.656024 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 10 00:53:27.656029 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 10 00:53:27.656034 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 10 00:53:27.656040 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 10 00:53:27.656046 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 10 00:53:27.656051 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 10 00:53:27.656056 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 10 00:53:27.656061 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:53:27.656066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 10 00:53:27.656072 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:53:27.656077 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 10 00:53:27.656082 kernel: TSC deadline timer available
Jul 10 00:53:27.656087 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 10 00:53:27.656092 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 10 00:53:27.656099 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 10 00:53:27.656104 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:53:27.656110 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Jul 10 00:53:27.656115 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 10 00:53:27.656120 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 10 00:53:27.656125 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 10 00:53:27.656131 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 10 00:53:27.656136 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 10 00:53:27.656142 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 10 00:53:27.656147 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 10 00:53:27.656152 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 10 00:53:27.656157 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 10 00:53:27.656169 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 10 00:53:27.656175 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 10 00:53:27.656181 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 10 00:53:27.656186 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 10 00:53:27.656192 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 10 00:53:27.656198 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 10 00:53:27.656204 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 10 00:53:27.656209 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 10 00:53:27.656215 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 10 00:53:27.656220 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jul 10 00:53:27.656226 kernel: Policy zone: DMA32
Jul 10 00:53:27.656232 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:53:27.656238 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:53:27.656244 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 10 00:53:27.656250 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 10 00:53:27.656256 kernel: printk: log_buf_len min size: 262144 bytes
Jul 10 00:53:27.656261 kernel: printk: log_buf_len: 1048576 bytes
Jul 10 00:53:27.656267 kernel: printk: early log buf free: 239728(91%)
Jul 10 00:53:27.656272 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:53:27.656278 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 10 00:53:27.656284 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:53:27.656289 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2275K rwdata, 13724K rodata, 47472K init, 4108K bss, 155976K reserved, 0K cma-reserved)
Jul 10 00:53:27.656296 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 10 00:53:27.656302 kernel: ftrace: allocating 34602 entries in 136 pages
Jul 10 00:53:27.656307 kernel: ftrace: allocated 136 pages with 2 groups
Jul 10 00:53:27.656314 kernel: rcu: Hierarchical RCU implementation.
Jul 10 00:53:27.656320 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:53:27.656326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 10 00:53:27.656333 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:53:27.656339 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:53:27.656344 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:53:27.656350 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 10 00:53:27.656355 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 10 00:53:27.656361 kernel: random: crng init done
Jul 10 00:53:27.656366 kernel: Console: colour VGA+ 80x25
Jul 10 00:53:27.656372 kernel: printk: console [tty0] enabled
Jul 10 00:53:27.656378 kernel: printk: console [ttyS0] enabled
Jul 10 00:53:27.656384 kernel: ACPI: Core revision 20210730
Jul 10 00:53:27.656390 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 10 00:53:27.656396 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:53:27.656402 kernel: x2apic enabled
Jul 10 00:53:27.656407 kernel: Switched APIC routing to physical x2apic.
Jul 10 00:53:27.656413 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:53:27.656419 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 10 00:53:27.656425 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 10 00:53:27.656430 kernel: Disabled fast string operations
Jul 10 00:53:27.656437 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 10 00:53:27.656442 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 10 00:53:27.656448 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:53:27.656454 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 10 00:53:27.656460 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 10 00:53:27.656465 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 10 00:53:27.656471 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 10 00:53:27.656483 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 10 00:53:27.656488 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 10 00:53:27.656495 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:53:27.656501 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 10 00:53:27.656507 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 10 00:53:27.656512 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 10 00:53:27.656518 kernel: GDS: Unknown: Dependent on hypervisor status Jul 10 00:53:27.656523 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 10 00:53:27.656529 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:53:27.656535 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:53:27.656541 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:53:27.656547 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:53:27.656553 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 10 00:53:27.656558 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:53:27.656564 kernel: pid_max: default: 131072 minimum: 1024 Jul 10 00:53:27.656569 kernel: LSM: Security Framework initializing Jul 10 00:53:27.656575 kernel: SELinux: Initializing. 
Jul 10 00:53:27.656581 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 00:53:27.656587 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 00:53:27.656592 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 10 00:53:27.656599 kernel: Performance Events: Skylake events, core PMU driver. Jul 10 00:53:27.656605 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 10 00:53:27.656610 kernel: core: CPUID marked event: 'instructions' unavailable Jul 10 00:53:27.656616 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 10 00:53:27.656621 kernel: core: CPUID marked event: 'cache references' unavailable Jul 10 00:53:27.656627 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 10 00:53:27.656632 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 10 00:53:27.656638 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 10 00:53:27.656644 kernel: ... version: 1 Jul 10 00:53:27.656650 kernel: ... bit width: 48 Jul 10 00:53:27.656655 kernel: ... generic registers: 4 Jul 10 00:53:27.656661 kernel: ... value mask: 0000ffffffffffff Jul 10 00:53:27.656666 kernel: ... max period: 000000007fffffff Jul 10 00:53:27.656672 kernel: ... fixed-purpose events: 0 Jul 10 00:53:27.656677 kernel: ... event mask: 000000000000000f Jul 10 00:53:27.656683 kernel: signal: max sigframe size: 1776 Jul 10 00:53:27.656689 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:53:27.656695 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 10 00:53:27.656701 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:53:27.656707 kernel: x86: Booting SMP configuration: Jul 10 00:53:27.656712 kernel: .... 
node #0, CPUs: #1 Jul 10 00:53:27.656718 kernel: Disabled fast string operations Jul 10 00:53:27.656724 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 10 00:53:27.656730 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 10 00:53:27.656735 kernel: smp: Brought up 1 node, 2 CPUs Jul 10 00:53:27.656741 kernel: smpboot: Max logical packages: 128 Jul 10 00:53:27.656747 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 10 00:53:27.656753 kernel: devtmpfs: initialized Jul 10 00:53:27.656759 kernel: x86/mm: Memory block size: 128MB Jul 10 00:53:27.656764 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 10 00:53:27.656770 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:53:27.656776 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 10 00:53:27.656781 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:53:27.656787 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:53:27.656793 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:53:27.656798 kernel: audit: type=2000 audit(1752108805.086:1): state=initialized audit_enabled=0 res=1 Jul 10 00:53:27.656805 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:53:27.656810 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:53:27.656816 kernel: cpuidle: using governor menu Jul 10 00:53:27.656821 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 10 00:53:27.656827 kernel: ACPI: bus type PCI registered Jul 10 00:53:27.656833 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:53:27.656838 kernel: dca service started, version 1.12.1 Jul 10 00:53:27.656844 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 10 00:53:27.656850 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Jul 10 00:53:27.656856 kernel: PCI: Using configuration type 1 for base access Jul 10 00:53:27.656862 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 10 00:53:27.656867 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:53:27.656873 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:53:27.656879 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:53:27.656884 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:53:27.656890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:53:27.656895 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 00:53:27.656901 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 00:53:27.656908 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 00:53:27.656913 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:53:27.656919 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 10 00:53:27.656924 kernel: ACPI: Interpreter enabled Jul 10 00:53:27.656930 kernel: ACPI: PM: (supports S0 S1 S5) Jul 10 00:53:27.656936 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:53:27.656941 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:53:27.656947 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 10 00:53:27.656953 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 10 00:53:27.657029 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:53:27.657080 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 10 00:53:27.657128 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 10 00:53:27.657136 kernel: PCI host bridge to bus 0000:00 Jul 10 00:53:27.657185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:53:27.657228 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Jul 10 00:53:27.657273 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 10 00:53:27.657315 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:53:27.657356 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 10 00:53:27.657397 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 10 00:53:27.657454 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 10 00:53:27.665835 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 10 00:53:27.665904 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 10 00:53:27.665966 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 10 00:53:27.666016 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 10 00:53:27.666064 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 10 00:53:27.666112 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 10 00:53:27.666159 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 10 00:53:27.666209 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 10 00:53:27.666278 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 10 00:53:27.666330 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 10 00:53:27.666379 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 10 00:53:27.666430 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 10 00:53:27.666498 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 10 00:53:27.666550 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 10 00:53:27.666602 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 10 00:53:27.666653 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 10 00:53:27.666700 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 10 00:53:27.666747 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 10 00:53:27.666793 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 10 00:53:27.666841 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:53:27.666891 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 10 00:53:27.666946 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.666996 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667048 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667098 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667150 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667199 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667253 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667305 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667356 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667404 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667455 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667512 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667565 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667616 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667667 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667715 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667765 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667813 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.667868 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.667915 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Jul 10 00:53:27.667966 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.668015 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.668066 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.668114 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.668170 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.668219 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.668269 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.668316 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.668367 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.668416 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.668467 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.670605 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.670663 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.670714 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.670766 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.670815 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.670866 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.670917 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.670968 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.671015 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.671066 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.671114 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.671165 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.671214 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.671268 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.671315 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.671367 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.671414 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.671465 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.673577 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.673636 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.673689 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.673742 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.673792 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.673859 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.673906 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.673959 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.674006 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.674055 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.674102 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.681517 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.681579 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.681643 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 10 00:53:27.681693 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.681744 kernel: pci_bus 0000:01: extended config space not accessible Jul 10 00:53:27.681794 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 10 00:53:27.681844 kernel: pci_bus 0000:02: extended config space not accessible Jul 
10 00:53:27.681853 kernel: acpiphp: Slot [32] registered Jul 10 00:53:27.681860 kernel: acpiphp: Slot [33] registered Jul 10 00:53:27.681866 kernel: acpiphp: Slot [34] registered Jul 10 00:53:27.681872 kernel: acpiphp: Slot [35] registered Jul 10 00:53:27.681877 kernel: acpiphp: Slot [36] registered Jul 10 00:53:27.681883 kernel: acpiphp: Slot [37] registered Jul 10 00:53:27.681888 kernel: acpiphp: Slot [38] registered Jul 10 00:53:27.681894 kernel: acpiphp: Slot [39] registered Jul 10 00:53:27.681900 kernel: acpiphp: Slot [40] registered Jul 10 00:53:27.681905 kernel: acpiphp: Slot [41] registered Jul 10 00:53:27.681910 kernel: acpiphp: Slot [42] registered Jul 10 00:53:27.681917 kernel: acpiphp: Slot [43] registered Jul 10 00:53:27.681923 kernel: acpiphp: Slot [44] registered Jul 10 00:53:27.681928 kernel: acpiphp: Slot [45] registered Jul 10 00:53:27.681934 kernel: acpiphp: Slot [46] registered Jul 10 00:53:27.681939 kernel: acpiphp: Slot [47] registered Jul 10 00:53:27.681945 kernel: acpiphp: Slot [48] registered Jul 10 00:53:27.681950 kernel: acpiphp: Slot [49] registered Jul 10 00:53:27.681956 kernel: acpiphp: Slot [50] registered Jul 10 00:53:27.681961 kernel: acpiphp: Slot [51] registered Jul 10 00:53:27.681967 kernel: acpiphp: Slot [52] registered Jul 10 00:53:27.681973 kernel: acpiphp: Slot [53] registered Jul 10 00:53:27.681979 kernel: acpiphp: Slot [54] registered Jul 10 00:53:27.681985 kernel: acpiphp: Slot [55] registered Jul 10 00:53:27.681990 kernel: acpiphp: Slot [56] registered Jul 10 00:53:27.681995 kernel: acpiphp: Slot [57] registered Jul 10 00:53:27.682001 kernel: acpiphp: Slot [58] registered Jul 10 00:53:27.682007 kernel: acpiphp: Slot [59] registered Jul 10 00:53:27.682012 kernel: acpiphp: Slot [60] registered Jul 10 00:53:27.682018 kernel: acpiphp: Slot [61] registered Jul 10 00:53:27.682024 kernel: acpiphp: Slot [62] registered Jul 10 00:53:27.682030 kernel: acpiphp: Slot [63] registered Jul 10 00:53:27.682078 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Jul 10 00:53:27.682126 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 10 00:53:27.682172 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 10 00:53:27.682219 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 00:53:27.682265 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 10 00:53:27.682312 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 10 00:53:27.682360 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 10 00:53:27.682406 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 10 00:53:27.682452 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 10 00:53:27.682513 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 10 00:53:27.682563 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 10 00:53:27.682611 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 10 00:53:27.682659 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 10 00:53:27.682710 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 10 00:53:27.682758 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 10 00:53:27.682807 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 00:53:27.682854 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 10 00:53:27.682936 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 10 00:53:27.682985 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 00:53:27.683031 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 10 00:53:27.683077 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 10 00:53:27.683125 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 00:53:27.683173 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 00:53:27.683220 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 10 00:53:27.683267 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 10 00:53:27.683313 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 00:53:27.683361 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 00:53:27.683408 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 10 00:53:27.683457 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 00:53:27.683517 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 00:53:27.683565 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 10 00:53:27.683612 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 00:53:27.683705 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 00:53:27.683753 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 10 00:53:27.683799 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 00:53:27.683847 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 00:53:27.683893 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 10 00:53:27.683940 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 00:53:27.683988 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 00:53:27.684035 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 10 00:53:27.684084 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 00:53:27.684139 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 10 00:53:27.684208 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 10 00:53:27.684274 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 10 00:53:27.684322 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 10 00:53:27.684370 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 10 00:53:27.684418 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 10 00:53:27.684467 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 10 00:53:27.684527 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 10 00:53:27.684576 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 10 00:53:27.684625 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 00:53:27.684676 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 10 00:53:27.684723 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 10 00:53:27.684772 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 00:53:27.684818 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 10 00:53:27.684865 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 10 00:53:27.684914 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 00:53:27.684962 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 00:53:27.685009 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 10 00:53:27.685055 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 10 00:53:27.685101 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 00:53:27.685150 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 00:53:27.685198 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 10 00:53:27.685247 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 00:53:27.685295 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 00:53:27.685342 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 10 00:53:27.685389 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 00:53:27.685436 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 00:53:27.685489 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 10 00:53:27.685538 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 00:53:27.685586 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 00:53:27.685634 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 10 00:53:27.685699 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 00:53:27.685764 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 00:53:27.685810 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 10 00:53:27.685856 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 00:53:27.685904 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 10 00:53:27.685950 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 10 00:53:27.685996 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 10 00:53:27.686045 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 00:53:27.686092 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 00:53:27.686138 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 10 00:53:27.686185 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 10 00:53:27.686232 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 00:53:27.686280 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 00:53:27.686327 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 10 00:53:27.686373 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 10 00:53:27.686422 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 00:53:27.686470 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 10 00:53:27.689288 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 10 00:53:27.689343 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 00:53:27.689395 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 00:53:27.689444 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 10 00:53:27.689498 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 00:53:27.689551 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 10 00:53:27.689599 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 10 00:53:27.689656 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 00:53:27.689708 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 00:53:27.689756 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 10 00:53:27.689804 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 00:53:27.689852 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 00:53:27.689900 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 10 00:53:27.689949 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 00:53:27.689998 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 00:53:27.690045 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 10 00:53:27.690092 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 10 00:53:27.690139 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 00:53:27.690188 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 00:53:27.690235 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 10 00:53:27.690283 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 10 00:53:27.690332 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 00:53:27.690381 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 00:53:27.690429 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 10 00:53:27.690482 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 00:53:27.690538 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 00:53:27.690586 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 10 00:53:27.690632 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 00:53:27.690680 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 10 
00:53:27.690731 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 10 00:53:27.690778 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 00:53:27.690827 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 00:53:27.691181 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 10 00:53:27.691238 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 00:53:27.691292 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 00:53:27.691639 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 10 00:53:27.691696 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 00:53:27.691750 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 00:53:27.691799 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 10 00:53:27.691847 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 00:53:27.691855 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 10 00:53:27.691861 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 10 00:53:27.691867 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 10 00:53:27.691873 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:53:27.691878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 10 00:53:27.691884 kernel: iommu: Default domain type: Translated Jul 10 00:53:27.691891 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:53:27.691940 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 10 00:53:27.691988 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:53:27.692036 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 10 00:53:27.692044 kernel: vgaarb: loaded Jul 10 00:53:27.692050 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:53:27.692056 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:53:27.692061 kernel: PTP clock support registered Jul 10 00:53:27.692068 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:53:27.692074 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:53:27.692080 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 10 00:53:27.692086 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 10 00:53:27.692091 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 10 00:53:27.692097 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 10 00:53:27.692104 kernel: clocksource: Switched to clocksource tsc-early Jul 10 00:53:27.692109 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:53:27.692115 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:53:27.692121 kernel: pnp: PnP ACPI init Jul 10 00:53:27.692177 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 10 00:53:27.692222 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 10 00:53:27.692265 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 10 00:53:27.692312 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 10 00:53:27.692359 kernel: pnp 00:06: [dma 2] Jul 10 00:53:27.692405 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 10 00:53:27.692451 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 10 00:53:27.692513 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 10 00:53:27.692523 kernel: pnp: PnP ACPI: found 8 devices Jul 10 00:53:27.692529 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:53:27.692535 kernel: NET: Registered PF_INET protocol family Jul 10 00:53:27.692540 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:53:27.692546 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Jul 10 00:53:27.692552 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:53:27.692560 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 10 00:53:27.692565 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 10 00:53:27.692571 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 10 00:53:27.692578 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 00:53:27.692583 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 00:53:27.692589 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:53:27.692595 kernel: NET: Registered PF_XDP protocol family Jul 10 00:53:27.692644 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 10 00:53:27.692697 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 10 00:53:27.692747 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 10 00:53:27.692796 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 10 00:53:27.692845 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 10 00:53:27.692894 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 10 00:53:27.692943 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 10 00:53:27.692993 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 10 00:53:27.693041 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 10 00:53:27.693089 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 10 00:53:27.693138 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Jul 10 00:53:27.693186 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 10 00:53:27.693234 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 10 00:53:27.693283 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 10 00:53:27.693332 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 10 00:53:27.693380 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 10 00:53:27.693428 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 10 00:53:27.693484 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 10 00:53:27.693540 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 10 00:53:27.693591 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 10 00:53:27.693640 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 10 00:53:27.693689 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 10 00:53:27.693736 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 10 00:53:27.693783 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 00:53:27.694108 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 00:53:27.694174 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.694225 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.694275 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.695541 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.695603 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.695867 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.695926 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.695978 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.696031 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.696098 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.696366 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.696439 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.696798 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.696858 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.696910 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.697232 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.697293 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.697345 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.697395 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.697446 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.697637 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.697689 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.697858 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.697912 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.697963 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.698029 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.698219 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.698272 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Jul 10 00:53:27.698320 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.698672 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.698732 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.698783 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.698835 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.698884 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.698933 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.698982 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699030 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699077 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699125 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699173 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699223 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699271 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699318 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699365 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699413 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699460 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699530 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699578 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.699626 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.699955 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700017 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700068 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700117 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700164 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700212 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700261 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700308 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700356 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700403 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700454 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700510 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700558 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700605 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700653 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700700 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700748 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700795 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700843 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700894 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.700942 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.700991 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.701039 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701087 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Jul 10 00:53:27.701135 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701183 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.701245 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701291 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.701338 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701387 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.701433 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701485 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.701533 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701579 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 10 00:53:27.701626 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 10 00:53:27.701679 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 10 00:53:27.701727 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 10 00:53:27.701773 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 10 00:53:27.701821 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 10 00:53:27.701867 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 00:53:27.701918 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 10 00:53:27.701966 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 00:53:27.702012 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 10 00:53:27.702059 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 10 00:53:27.702105 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 00:53:27.702153 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 00:53:27.702202 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Jul 10 00:53:27.702248 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 10 00:53:27.702295 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 00:53:27.702342 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 00:53:27.702389 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 10 00:53:27.702435 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 10 00:53:27.702492 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 00:53:27.702542 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 00:53:27.702589 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 10 00:53:27.702635 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 00:53:27.702965 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 00:53:27.703020 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 10 00:53:27.703069 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 00:53:27.703398 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 00:53:27.703453 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 10 00:53:27.703537 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 00:53:27.703590 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 00:53:27.703811 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 10 00:53:27.703861 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 00:53:27.703909 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 00:53:27.704229 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 10 00:53:27.704282 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 00:53:27.704334 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 10 
00:53:27.704381 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 00:53:27.704428 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 10 00:53:27.704532 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 10 00:53:27.704584 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 00:53:27.704631 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 00:53:27.704965 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 10 00:53:27.705020 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 10 00:53:27.705070 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 00:53:27.705121 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 00:53:27.705169 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 10 00:53:27.705216 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 10 00:53:27.705266 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 00:53:27.705313 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 00:53:27.705360 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 10 00:53:27.705407 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 00:53:27.705454 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 00:53:27.705513 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 10 00:53:27.705561 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 00:53:27.705608 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 00:53:27.705654 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 10 00:53:27.705999 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 00:53:27.706056 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 00:53:27.706107 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 10 
00:53:27.706434 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 00:53:27.706514 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 00:53:27.706568 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 10 00:53:27.706617 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 00:53:27.706666 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 10 00:53:27.706714 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 10 00:53:27.706762 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 10 00:53:27.706813 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 00:53:27.706863 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 00:53:27.706910 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 10 00:53:27.706958 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 10 00:53:27.707006 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 00:53:27.707053 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 00:53:27.707101 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 10 00:53:27.707149 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 10 00:53:27.707196 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 00:53:27.707245 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 10 00:53:27.707295 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 10 00:53:27.707343 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 00:53:27.707391 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 00:53:27.707439 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 10 00:53:27.707493 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 00:53:27.707542 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Jul 10 00:53:27.707590 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 10 00:53:27.707642 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 00:53:27.707690 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 00:53:27.707739 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 10 00:53:27.707786 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 00:53:27.707834 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 00:53:27.707881 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 10 00:53:27.707928 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 00:53:27.707976 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 00:53:27.708022 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 10 00:53:27.708069 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 10 00:53:27.708116 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 00:53:27.708166 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 00:53:27.708214 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 10 00:53:27.708260 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 10 00:53:27.708308 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 00:53:27.708355 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 00:53:27.708402 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 10 00:53:27.708449 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 00:53:27.708686 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 00:53:27.708739 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 10 00:53:27.708788 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 00:53:27.709105 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Jul 10 00:53:27.709168 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 10 00:53:27.709219 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 00:53:27.709291 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 00:53:27.709434 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 10 00:53:27.709495 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 00:53:27.709546 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 00:53:27.709980 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 10 00:53:27.710123 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 00:53:27.710213 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 00:53:27.710264 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 10 00:53:27.710314 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 00:53:27.710361 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 10 00:53:27.710404 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 10 00:53:27.710448 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 10 00:53:27.710530 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 10 00:53:27.710574 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 10 00:53:27.710620 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 10 00:53:27.710672 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 10 00:53:27.710737 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 00:53:27.710897 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 10 00:53:27.710946 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 10 00:53:27.710991 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Jul 10 00:53:27.711035 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 10 00:53:27.711421 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 10 00:53:27.711542 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 10 00:53:27.711592 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 10 00:53:27.711644 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 00:53:27.711693 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 10 00:53:27.711738 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 10 00:53:27.711782 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 00:53:27.711830 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 10 00:53:27.711877 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 10 00:53:27.711921 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 00:53:27.711971 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 10 00:53:27.712015 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 00:53:27.712063 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 10 00:53:27.712107 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 00:53:27.712156 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 10 00:53:27.712340 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 00:53:27.712393 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 10 00:53:27.712438 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 00:53:27.712776 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 10 00:53:27.712828 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 00:53:27.712881 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 10 00:53:27.712928 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 10 00:53:27.713092 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 00:53:27.713147 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 10 00:53:27.713193 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 10 00:53:27.713532 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 00:53:27.713596 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 10 00:53:27.713648 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 10 00:53:27.713693 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 00:53:27.713956 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 10 00:53:27.714007 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 00:53:27.714057 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 10 00:53:27.714116 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 00:53:27.714375 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 10 00:53:27.714428 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 00:53:27.714513 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 10 00:53:27.714564 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 00:53:27.715451 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 10 00:53:27.715517 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 00:53:27.715573 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 10 00:53:27.715619 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 10 00:53:27.715961 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 00:53:27.716017 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 10 00:53:27.716064 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 10 00:53:27.716110 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 00:53:27.716160 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 10 00:53:27.716209 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 10 00:53:27.716254 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 00:53:27.716303 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 10 00:53:27.716349 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 00:53:27.716399 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 10 00:53:27.716444 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 00:53:27.716529 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 10 00:53:27.716577 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 00:53:27.716626 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 10 00:53:27.716671 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 00:53:27.717012 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 10 00:53:27.717065 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 00:53:27.717117 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 10 00:53:27.717167 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 10 00:53:27.717213 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 00:53:27.717263 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 10 00:53:27.717309 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 10 00:53:27.717354 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 00:53:27.717403 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 10 00:53:27.717450 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 00:53:27.717515 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 10 00:53:27.717562 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 00:53:27.717614 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 10 00:53:27.717670 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 00:53:27.717722 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 10 00:53:27.717770 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 00:53:27.717822 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 10 00:53:27.717866 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 00:53:27.717917 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 10 00:53:27.717962 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 00:53:27.718015 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 10 00:53:27.718026 kernel: PCI: CLS 32 bytes, default 64 Jul 10 00:53:27.718033 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 10 00:53:27.718040 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 10 00:53:27.718047 kernel: clocksource: Switched to clocksource tsc Jul 10 00:53:27.718053 kernel: Initialise system trusted keyrings Jul 10 00:53:27.718059 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 10 00:53:27.718065 kernel: Key type asymmetric registered Jul 10 00:53:27.718071 kernel: Asymmetric key parser 'x509' registered Jul 10 00:53:27.718077 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 00:53:27.718085 kernel: io scheduler mq-deadline registered Jul 10 00:53:27.718091 kernel: io scheduler kyber registered Jul 10 00:53:27.718097 kernel: io scheduler bfq 
registered Jul 10 00:53:27.718149 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 10 00:53:27.718199 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718249 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 10 00:53:27.718298 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718347 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 10 00:53:27.718397 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718447 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 10 00:53:27.718504 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718554 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 10 00:53:27.718603 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718653 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 10 00:53:27.718705 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718754 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 10 00:53:27.718804 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718853 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 10 00:53:27.718902 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.718954 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 10 00:53:27.719003 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719052 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 10 00:53:27.719100 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719148 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 10 00:53:27.719196 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719245 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 10 00:53:27.719297 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719347 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 10 00:53:27.719396 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719445 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 10 00:53:27.719509 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719562 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 10 00:53:27.719612 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719662 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 10 00:53:27.719711 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719760 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 10 00:53:27.719808 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.719860 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 10 00:53:27.720053 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.720106 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 10 00:53:27.720156 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.720388 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 10 00:53:27.720442 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.720508 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 10 00:53:27.720579 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.720879 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 10 00:53:27.720936 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.721313 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 10 00:53:27.721372 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.721427 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 10 00:53:27.721527 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.721890 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 10 00:53:27.721945 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.721997 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 10 00:53:27.722049 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.722189 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 10 00:53:27.722243 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.722293 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 10 00:53:27.722614 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.722675 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 10 00:53:27.722729 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.722778 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 10 00:53:27.722828 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.722878 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 10 00:53:27.722926 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.722974 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 10 00:53:27.723025 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:53:27.723034 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:53:27.723040 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:53:27.723046 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:53:27.723053 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 10 00:53:27.723059 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:53:27.723065 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:53:27.723115 kernel: rtc_cmos 00:01: registered as rtc0 Jul 10 00:53:27.723160 kernel: rtc_cmos 00:01: setting system clock to 2025-07-10T00:53:27 UTC (1752108807) Jul 10 00:53:27.723204 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 10 00:53:27.723213 kernel: intel_pstate: CPU model not supported Jul 10 00:53:27.723219 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:53:27.723225 kernel: Segment Routing with IPv6 Jul 10 00:53:27.723231 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:53:27.723238 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:53:27.723245 kernel: Key type dns_resolver registered Jul 10 00:53:27.723252 kernel: IPI shorthand broadcast: enabled Jul 10 00:53:27.723258 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:53:27.723264 kernel: sched_clock: Marking stable (874882517, 224045187)->(1160663554, -61735850) Jul 10 00:53:27.723270 kernel: registered taskstats version 1 Jul 10 00:53:27.723276 kernel: Loading compiled-in X.509 certificates Jul 10 00:53:27.723282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 6ebecdd7757c0df63fc51731f0b99957f4e4af16' Jul 10 00:53:27.723289 kernel: Key type .fscrypt registered Jul 10 00:53:27.723295 kernel: Key type fscrypt-provisioning 
registered Jul 10 00:53:27.723301 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:53:27.723308 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:53:27.723314 kernel: ima: No architecture policies found Jul 10 00:53:27.723320 kernel: clk: Disabling unused clocks Jul 10 00:53:27.723326 kernel: Freeing unused kernel image (initmem) memory: 47472K Jul 10 00:53:27.723332 kernel: Write protecting the kernel read-only data: 28672k Jul 10 00:53:27.723339 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 10 00:53:27.723345 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Jul 10 00:53:27.723351 kernel: Run /init as init process Jul 10 00:53:27.723358 kernel: with arguments: Jul 10 00:53:27.723364 kernel: /init Jul 10 00:53:27.723370 kernel: with environment: Jul 10 00:53:27.723376 kernel: HOME=/ Jul 10 00:53:27.723382 kernel: TERM=linux Jul 10 00:53:27.723388 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:53:27.723395 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:53:27.723403 systemd[1]: Detected virtualization vmware. Jul 10 00:53:27.723411 systemd[1]: Detected architecture x86-64. Jul 10 00:53:27.723417 systemd[1]: Running in initrd. Jul 10 00:53:27.723423 systemd[1]: No hostname configured, using default hostname. Jul 10 00:53:27.723429 systemd[1]: Hostname set to . Jul 10 00:53:27.723435 systemd[1]: Initializing machine ID from random generator. Jul 10 00:53:27.723441 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:53:27.723447 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:53:27.723453 systemd[1]: Reached target cryptsetup.target. 
Jul 10 00:53:27.723461 systemd[1]: Reached target paths.target. Jul 10 00:53:27.723467 systemd[1]: Reached target slices.target. Jul 10 00:53:27.723473 systemd[1]: Reached target swap.target. Jul 10 00:53:27.723489 systemd[1]: Reached target timers.target. Jul 10 00:53:27.723495 systemd[1]: Listening on iscsid.socket. Jul 10 00:53:27.723501 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:53:27.723508 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:53:27.723514 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:53:27.723521 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:53:27.723527 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:53:27.723534 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:53:27.723540 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:53:27.723546 systemd[1]: Reached target sockets.target. Jul 10 00:53:27.723552 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:53:27.723559 systemd[1]: Finished network-cleanup.service. Jul 10 00:53:27.723565 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:53:27.723571 systemd[1]: Starting systemd-journald.service... Jul 10 00:53:27.723578 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:53:27.723584 systemd[1]: Starting systemd-resolved.service... Jul 10 00:53:27.723591 systemd[1]: Starting systemd-vconsole-setup.service... Jul 10 00:53:27.723597 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:53:27.723603 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:53:27.723609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:53:27.723615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:53:27.723622 kernel: audit: type=1130 audit(1752108807.657:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:53:27.723629 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 00:53:27.723637 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:53:27.723645 kernel: audit: type=1130 audit(1752108807.660:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.723651 systemd[1]: Finished dracut-cmdline-ask.service. Jul 10 00:53:27.723657 kernel: audit: type=1130 audit(1752108807.673:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.723664 systemd[1]: Started systemd-resolved.service. Jul 10 00:53:27.723670 kernel: audit: type=1130 audit(1752108807.677:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.723676 systemd[1]: Reached target nss-lookup.target. Jul 10 00:53:27.723684 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:53:27.723691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:53:27.723697 kernel: Bridge firewalling registered Jul 10 00:53:27.723703 kernel: SCSI subsystem initialized Jul 10 00:53:27.723709 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:53:27.723722 systemd-journald[216]: Journal started Jul 10 00:53:27.723753 systemd-journald[216]: Runtime Journal (/run/log/journal/2e997e39599647178d10faabd0b5daee) is 4.8M, max 38.8M, 34.0M free. Jul 10 00:53:27.724881 systemd[1]: Started systemd-journald.service. 
Jul 10 00:53:27.724897 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:53:27.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.647666 systemd-modules-load[217]: Inserted module 'overlay' Jul 10 00:53:27.727854 kernel: audit: type=1130 audit(1752108807.723:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.675117 systemd-resolved[218]: Positive Trust Anchors: Jul 10 00:53:27.729281 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:53:27.675122 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:53:27.675140 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:53:27.676754 systemd-resolved[218]: Defaulting to hostname 'linux'. Jul 10 00:53:27.695347 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 10 00:53:27.731493 dracut-cmdline[232]: dracut-dracut-053 Jul 10 00:53:27.731493 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 10 00:53:27.731493 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 00:53:27.733394 systemd-modules-load[217]: Inserted module 'dm_multipath' Jul 10 00:53:27.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.733738 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:53:27.734223 systemd[1]: Starting systemd-sysctl.service... 
Jul 10 00:53:27.737490 kernel: audit: type=1130 audit(1752108807.732:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.740403 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:53:27.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.743531 kernel: audit: type=1130 audit(1752108807.739:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.753495 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:53:27.768949 kernel: iscsi: registered transport (tcp) Jul 10 00:53:27.786497 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:53:27.786527 kernel: QLogic iSCSI HBA Driver Jul 10 00:53:27.809584 kernel: audit: type=1130 audit(1752108807.805:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:27.806502 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:53:27.807194 systemd[1]: Starting dracut-pre-udev.service... 
Jul 10 00:53:27.845501 kernel: raid6: avx2x4 gen() 47657 MB/s Jul 10 00:53:27.862507 kernel: raid6: avx2x4 xor() 18657 MB/s Jul 10 00:53:27.879498 kernel: raid6: avx2x2 gen() 48152 MB/s Jul 10 00:53:27.896499 kernel: raid6: avx2x2 xor() 29457 MB/s Jul 10 00:53:27.913502 kernel: raid6: avx2x1 gen() 41129 MB/s Jul 10 00:53:27.930505 kernel: raid6: avx2x1 xor() 24276 MB/s Jul 10 00:53:27.947505 kernel: raid6: sse2x4 gen() 19420 MB/s Jul 10 00:53:27.964529 kernel: raid6: sse2x4 xor() 10844 MB/s Jul 10 00:53:27.981497 kernel: raid6: sse2x2 gen() 19445 MB/s Jul 10 00:53:27.998500 kernel: raid6: sse2x2 xor() 12547 MB/s Jul 10 00:53:28.015499 kernel: raid6: sse2x1 gen() 17903 MB/s Jul 10 00:53:28.032709 kernel: raid6: sse2x1 xor() 8757 MB/s Jul 10 00:53:28.032754 kernel: raid6: using algorithm avx2x2 gen() 48152 MB/s Jul 10 00:53:28.032763 kernel: raid6: .... xor() 29457 MB/s, rmw enabled Jul 10 00:53:28.033898 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:53:28.042491 kernel: xor: automatically using best checksumming function avx Jul 10 00:53:28.109499 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 10 00:53:28.114775 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:53:28.115453 systemd[1]: Starting systemd-udevd.service... Jul 10 00:53:28.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:28.113000 audit: BPF prog-id=7 op=LOAD Jul 10 00:53:28.113000 audit: BPF prog-id=8 op=LOAD Jul 10 00:53:28.118490 kernel: audit: type=1130 audit(1752108808.113:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:28.126888 systemd-udevd[415]: Using default interface naming scheme 'v252'. Jul 10 00:53:28.130031 systemd[1]: Started systemd-udevd.service. 
Jul 10 00:53:28.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:28.130683 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:53:28.137676 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jul 10 00:53:28.155033 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:53:28.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:28.155633 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:53:28.221358 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:53:28.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:28.276492 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 10 00:53:28.289496 kernel: vmw_pvscsi: using 64bit dma Jul 10 00:53:28.289527 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Jul 10 00:53:28.289536 kernel: vmw_pvscsi: max_id: 16 Jul 10 00:53:28.289543 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 10 00:53:28.291487 kernel: libata version 3.00 loaded. 
Jul 10 00:53:28.297489 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 10 00:53:28.308626 kernel: scsi host1: ata_piix Jul 10 00:53:28.308704 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 10 00:53:28.314737 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 10 00:53:28.314749 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 10 00:53:28.314756 kernel: vmw_pvscsi: using MSI-X Jul 10 00:53:28.314763 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 10 00:53:28.314831 kernel: scsi host2: ata_piix Jul 10 00:53:28.314890 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 10 00:53:28.314899 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 10 00:53:28.314908 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 10 00:53:28.314968 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 10 00:53:28.315035 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:53:28.315043 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 10 00:53:28.475505 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 10 00:53:28.479508 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 10 00:53:28.486495 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 10 00:53:28.497124 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 10 00:53:28.497157 kernel: AES CTR mode by8 optimization enabled Jul 10 00:53:28.503521 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 10 00:53:28.553237 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 10 00:53:28.553321 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 10 00:53:28.553383 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 10 00:53:28.553442 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 10 00:53:28.553511 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 10 00:53:28.553583 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:53:28.553592 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:53:28.553651 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:53:28.553659 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 10 00:53:28.694866 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:53:28.696486 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (470) Jul 10 00:53:28.699060 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:53:28.703233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:53:28.705348 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:53:28.705468 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:53:28.706263 systemd[1]: Starting disk-uuid.service... Jul 10 00:53:28.730490 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:53:28.734778 kernel: GPT:disk_guids don't match. Jul 10 00:53:28.734799 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:53:28.734808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:53:29.738495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:53:29.738751 disk-uuid[550]: The operation has completed successfully. 
Jul 10 00:53:29.783089 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:53:29.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:29.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:29.783149 systemd[1]: Finished disk-uuid.service. Jul 10 00:53:29.783755 systemd[1]: Starting verity-setup.service... Jul 10 00:53:29.793549 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 10 00:53:29.836221 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:53:29.836671 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:53:29.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:29.838031 systemd[1]: Finished verity-setup.service. Jul 10 00:53:29.889490 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:53:29.889896 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:53:29.890487 systemd[1]: Starting afterburn-network-kargs.service... Jul 10 00:53:29.890963 systemd[1]: Starting ignition-setup.service... Jul 10 00:53:29.907026 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:53:29.907066 kernel: BTRFS info (device sda6): using free space tree Jul 10 00:53:29.907077 kernel: BTRFS info (device sda6): has skinny extents Jul 10 00:53:29.911490 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 10 00:53:29.917042 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 10 00:53:29.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:29.923524 systemd[1]: Finished ignition-setup.service. Jul 10 00:53:29.924131 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 00:53:29.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:29.967307 systemd[1]: Finished afterburn-network-kargs.service. Jul 10 00:53:29.967904 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 00:53:30.021552 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:53:30.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.020000 audit: BPF prog-id=9 op=LOAD Jul 10 00:53:30.022507 systemd[1]: Starting systemd-networkd.service... Jul 10 00:53:30.039041 systemd-networkd[734]: lo: Link UP Jul 10 00:53:30.039272 systemd-networkd[734]: lo: Gained carrier Jul 10 00:53:30.039874 systemd-networkd[734]: Enumeration completed Jul 10 00:53:30.040061 systemd[1]: Started systemd-networkd.service. Jul 10 00:53:30.040242 systemd[1]: Reached target network.target. Jul 10 00:53:30.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.040736 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 10 00:53:30.041044 systemd[1]: Starting iscsiuio.service... 
Jul 10 00:53:30.046637 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 10 00:53:30.046805 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 10 00:53:30.043129 systemd-networkd[734]: ens192: Link UP Jul 10 00:53:30.043132 systemd-networkd[734]: ens192: Gained carrier Jul 10 00:53:30.047615 systemd[1]: Started iscsiuio.service. Jul 10 00:53:30.048541 systemd[1]: Starting iscsid.service... Jul 10 00:53:30.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.051016 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:53:30.051016 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:53:30.051016 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 00:53:30.051016 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:53:30.051016 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:53:30.051016 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:53:30.052218 systemd[1]: Started iscsid.service. Jul 10 00:53:30.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.053231 systemd[1]: Starting dracut-initqueue.service... 
Jul 10 00:53:30.059662 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:53:30.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.059982 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:53:30.060193 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:53:30.060705 systemd[1]: Reached target remote-fs.target. Jul 10 00:53:30.061542 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:53:30.066579 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:53:30.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.068859 ignition[606]: Ignition 2.14.0 Jul 10 00:53:30.068869 ignition[606]: Stage: fetch-offline Jul 10 00:53:30.068909 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 00:53:30.068924 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 00:53:30.071781 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:53:30.071860 ignition[606]: parsed url from cmdline: "" Jul 10 00:53:30.071863 ignition[606]: no config URL provided Jul 10 00:53:30.071865 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:53:30.071870 ignition[606]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:53:30.072285 ignition[606]: config successfully fetched Jul 10 00:53:30.072312 ignition[606]: parsing config with SHA512: 42f125ef49258b5a31690de6a8c5ec6dc33c78fe023b1458f66d20979315128d1736e7e3e78bc732f25b12bade027a5c00b7c0ebc94502316d8a9cda3466958d Jul 10 00:53:30.082238 unknown[606]: fetched base config from 
"system" Jul 10 00:53:30.082401 unknown[606]: fetched user config from "vmware" Jul 10 00:53:30.082900 ignition[606]: fetch-offline: fetch-offline passed Jul 10 00:53:30.083069 ignition[606]: Ignition finished successfully Jul 10 00:53:30.083765 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:53:30.083938 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:53:30.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.084433 systemd[1]: Starting ignition-kargs.service... Jul 10 00:53:30.090872 ignition[754]: Ignition 2.14.0 Jul 10 00:53:30.091186 ignition[754]: Stage: kargs Jul 10 00:53:30.091404 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 00:53:30.091601 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 00:53:30.093680 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:53:30.095535 ignition[754]: kargs: kargs passed Jul 10 00:53:30.095575 ignition[754]: Ignition finished successfully Jul 10 00:53:30.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.096814 systemd[1]: Finished ignition-kargs.service. Jul 10 00:53:30.097631 systemd[1]: Starting ignition-disks.service... 
Jul 10 00:53:30.103134 ignition[760]: Ignition 2.14.0 Jul 10 00:53:30.103453 ignition[760]: Stage: disks Jul 10 00:53:30.103676 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 00:53:30.103861 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 00:53:30.105794 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:53:30.107662 ignition[760]: disks: disks passed Jul 10 00:53:30.107708 ignition[760]: Ignition finished successfully Jul 10 00:53:30.108571 systemd[1]: Finished ignition-disks.service. Jul 10 00:53:30.108761 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:53:30.108872 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:53:30.108973 systemd[1]: Reached target local-fs.target. Jul 10 00:53:30.109071 systemd[1]: Reached target sysinit.target. Jul 10 00:53:30.109166 systemd[1]: Reached target basic.target. Jul 10 00:53:30.109910 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:53:30.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.124379 systemd-fsck[768]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks Jul 10 00:53:30.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:30.126158 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:53:30.126944 systemd[1]: Mounting sysroot.mount... Jul 10 00:53:30.135490 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:53:30.136113 systemd[1]: Mounted sysroot.mount. 
Jul 10 00:53:30.136427 systemd[1]: Reached target initrd-root-fs.target.
Jul 10 00:53:30.137692 systemd[1]: Mounting sysroot-usr.mount...
Jul 10 00:53:30.138366 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 10 00:53:30.138634 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:53:30.138868 systemd[1]: Reached target ignition-diskful.target.
Jul 10 00:53:30.139881 systemd[1]: Mounted sysroot-usr.mount.
Jul 10 00:53:30.140707 systemd[1]: Starting initrd-setup-root.service...
Jul 10 00:53:30.143854 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:53:30.149128 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:53:30.151625 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:53:30.154575 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:53:30.191235 systemd[1]: Finished initrd-setup-root.service.
Jul 10 00:53:30.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:30.192080 systemd[1]: Starting ignition-mount.service...
Jul 10 00:53:30.192749 systemd[1]: Starting sysroot-boot.service...
Jul 10 00:53:30.197506 bash[819]: umount: /sysroot/usr/share/oem: not mounted.
Jul 10 00:53:30.204541 ignition[820]: INFO : Ignition 2.14.0
Jul 10 00:53:30.204869 ignition[820]: INFO : Stage: mount
Jul 10 00:53:30.205100 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 10 00:53:30.205284 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 10 00:53:30.207451 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 10 00:53:30.210373 ignition[820]: INFO : mount: mount passed
Jul 10 00:53:30.210577 ignition[820]: INFO : Ignition finished successfully
Jul 10 00:53:30.211323 systemd[1]: Finished ignition-mount.service.
Jul 10 00:53:30.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:30.216742 systemd[1]: Finished sysroot-boot.service.
Jul 10 00:53:30.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:30.850393 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 10 00:53:30.858509 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829)
Jul 10 00:53:30.861020 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:53:30.861061 kernel: BTRFS info (device sda6): using free space tree
Jul 10 00:53:30.861075 kernel: BTRFS info (device sda6): has skinny extents
Jul 10 00:53:30.865491 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 10 00:53:30.867216 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 10 00:53:30.868043 systemd[1]: Starting ignition-files.service...
Jul 10 00:53:30.880133 ignition[849]: INFO : Ignition 2.14.0
Jul 10 00:53:30.880133 ignition[849]: INFO : Stage: files
Jul 10 00:53:30.880527 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 10 00:53:30.880527 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 10 00:53:30.881918 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 10 00:53:30.887788 ignition[849]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:53:30.891115 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:53:30.891115 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:53:30.912567 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:53:30.912773 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:53:30.916547 unknown[849]: wrote ssh authorized keys file for user: core
Jul 10 00:53:30.916843 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:53:30.917459 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 10 00:53:30.917677 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 10 00:53:30.917677 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 10 00:53:30.917677 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 10 00:53:30.967535 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:53:31.090505 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 10 00:53:31.093772 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:53:31.093953 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:53:31.494910 systemd-networkd[734]: ens192: Gained IPv6LL
Jul 10 00:53:31.617127 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 10 00:53:31.670286 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:53:31.670544 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:53:31.670544 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:53:31.670544 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:53:31.671009 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:53:31.671009 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:53:31.671009 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:53:31.671009 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:53:31.672603 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:53:31.674526 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:53:31.674705 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:53:31.674705 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:53:31.674705 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:53:31.675559 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Jul 10 00:53:31.675738 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition
Jul 10 00:53:31.679767 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3110852198"
Jul 10 00:53:31.680084 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3110852198": device or resource busy
Jul 10 00:53:31.680361 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3110852198", trying btrfs: device or resource busy
Jul 10 00:53:31.680664 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3110852198"
Jul 10 00:53:31.681008 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3110852198"
Jul 10 00:53:31.682064 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem3110852198"
Jul 10 00:53:31.682335 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem3110852198"
Jul 10 00:53:31.682604 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Jul 10 00:53:31.682879 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:53:31.683222 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 10 00:53:31.683348 systemd[1]: mnt-oem3110852198.mount: Deactivated successfully.
Jul 10 00:53:32.426129 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK
Jul 10 00:53:33.318185 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:53:33.318900 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 10 00:53:33.319166 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 10 00:53:33.319362 ignition[849]: INFO : files: op(12): [started] processing unit "vmtoolsd.service"
Jul 10 00:53:33.319509 ignition[849]: INFO : files: op(12): [finished] processing unit "vmtoolsd.service"
Jul 10 00:53:33.319651 ignition[849]: INFO : files: op(13): [started] processing unit "containerd.service"
Jul 10 00:53:33.319818 ignition[849]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:53:33.320092 ignition[849]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:53:33.320297 ignition[849]: INFO : files: op(13): [finished] processing unit "containerd.service"
Jul 10 00:53:33.320441 ignition[849]: INFO : files: op(15): [started] processing unit "prepare-helm.service"
Jul 10 00:53:33.320604 ignition[849]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:53:33.320836 ignition[849]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:53:33.321021 ignition[849]: INFO : files: op(15): [finished] processing unit "prepare-helm.service"
Jul 10 00:53:33.321164 ignition[849]: INFO : files: op(17): [started] processing unit "coreos-metadata.service"
Jul 10 00:53:33.321324 ignition[849]: INFO : files: op(17): op(18): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:53:33.321569 ignition[849]: INFO : files: op(17): op(18): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:53:33.321761 ignition[849]: INFO : files: op(17): [finished] processing unit "coreos-metadata.service"
Jul 10 00:53:33.321908 ignition[849]: INFO : files: op(19): [started] setting preset to enabled for "vmtoolsd.service"
Jul 10 00:53:33.322103 ignition[849]: INFO : files: op(19): [finished] setting preset to enabled for "vmtoolsd.service"
Jul 10 00:53:33.322255 ignition[849]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:53:33.322421 ignition[849]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:53:33.322578 ignition[849]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:53:33.322732 ignition[849]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:53:33.414492 ignition[849]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:53:33.414806 ignition[849]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:53:33.415142 ignition[849]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:53:33.415443 ignition[849]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:53:33.415699 ignition[849]: INFO : files: files passed
Jul 10 00:53:33.415879 ignition[849]: INFO : Ignition finished successfully
Jul 10 00:53:33.417765 systemd[1]: Finished ignition-files.service.
Jul 10 00:53:33.420707 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jul 10 00:53:33.420741 kernel: audit: type=1130 audit(1752108813.416:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.418558 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 10 00:53:33.418718 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 10 00:53:33.419216 systemd[1]: Starting ignition-quench.service...
Jul 10 00:53:33.426833 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:53:33.426899 systemd[1]: Finished ignition-quench.service.
Jul 10 00:53:33.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.430037 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:53:33.432581 kernel: audit: type=1130 audit(1752108813.425:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.432603 kernel: audit: type=1131 audit(1752108813.425:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.432453 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 10 00:53:33.435394 kernel: audit: type=1130 audit(1752108813.431:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.432709 systemd[1]: Reached target ignition-complete.target.
Jul 10 00:53:33.435947 systemd[1]: Starting initrd-parse-etc.service...
Jul 10 00:53:33.445323 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:53:33.445379 systemd[1]: Finished initrd-parse-etc.service.
Jul 10 00:53:33.445569 systemd[1]: Reached target initrd-fs.target.
Jul 10 00:53:33.445661 systemd[1]: Reached target initrd.target.
Jul 10 00:53:33.445775 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 10 00:53:33.446343 systemd[1]: Starting dracut-pre-pivot.service...
Jul 10 00:53:33.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.451985 kernel: audit: type=1130 audit(1752108813.444:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.452004 kernel: audit: type=1131 audit(1752108813.444:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.452810 systemd[1]: Finished dracut-pre-pivot.service.
Jul 10 00:53:33.453330 systemd[1]: Starting initrd-cleanup.service...
Jul 10 00:53:33.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.457444 kernel: audit: type=1130 audit(1752108813.451:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.459900 systemd[1]: Stopped target nss-lookup.target.
Jul 10 00:53:33.460246 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 10 00:53:33.460526 systemd[1]: Stopped target timers.target.
Jul 10 00:53:33.460773 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:53:33.460975 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 10 00:53:33.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.461358 systemd[1]: Stopped target initrd.target.
Jul 10 00:53:33.463907 systemd[1]: Stopped target basic.target.
Jul 10 00:53:33.464213 systemd[1]: Stopped target ignition-complete.target.
Jul 10 00:53:33.464509 kernel: audit: type=1131 audit(1752108813.459:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.464546 systemd[1]: Stopped target ignition-diskful.target.
Jul 10 00:53:33.464808 systemd[1]: Stopped target initrd-root-device.target.
Jul 10 00:53:33.465076 systemd[1]: Stopped target remote-fs.target.
Jul 10 00:53:33.465327 systemd[1]: Stopped target remote-fs-pre.target.
Jul 10 00:53:33.465602 systemd[1]: Stopped target sysinit.target.
Jul 10 00:53:33.465861 systemd[1]: Stopped target local-fs.target.
Jul 10 00:53:33.466115 systemd[1]: Stopped target local-fs-pre.target.
Jul 10 00:53:33.466371 systemd[1]: Stopped target swap.target.
Jul 10 00:53:33.466644 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:53:33.466852 systemd[1]: Stopped dracut-pre-mount.service.
Jul 10 00:53:33.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.467198 systemd[1]: Stopped target cryptsetup.target.
Jul 10 00:53:33.469574 kernel: audit: type=1131 audit(1752108813.465:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.469666 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:53:33.469736 systemd[1]: Stopped dracut-initqueue.service.
Jul 10 00:53:33.469957 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:53:33.472505 kernel: audit: type=1131 audit(1752108813.468:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.470013 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 10 00:53:33.472632 systemd[1]: Stopped target paths.target.
Jul 10 00:53:33.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.472779 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:53:33.475499 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 10 00:53:33.475661 systemd[1]: Stopped target slices.target.
Jul 10 00:53:33.475844 systemd[1]: Stopped target sockets.target.
Jul 10 00:53:33.476159 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:53:33.476228 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 10 00:53:33.476474 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:53:33.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.476538 systemd[1]: Stopped ignition-files.service.
Jul 10 00:53:33.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.477311 systemd[1]: Stopping ignition-mount.service...
Jul 10 00:53:33.484610 iscsid[739]: iscsid shutting down.
Jul 10 00:53:33.484771 ignition[888]: INFO : Ignition 2.14.0
Jul 10 00:53:33.484771 ignition[888]: INFO : Stage: umount
Jul 10 00:53:33.484771 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 10 00:53:33.484771 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 10 00:53:33.484771 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 10 00:53:33.479208 systemd[1]: Stopping iscsid.service...
Jul 10 00:53:33.479292 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:53:33.479375 systemd[1]: Stopped kmod-static-nodes.service.
Jul 10 00:53:33.480044 systemd[1]: Stopping sysroot-boot.service...
Jul 10 00:53:33.480147 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:53:33.480237 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 10 00:53:33.480419 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:53:33.480501 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 10 00:53:33.481895 systemd[1]: iscsid.service: Deactivated successfully.
Jul 10 00:53:33.489536 ignition[888]: INFO : umount: umount passed
Jul 10 00:53:33.489536 ignition[888]: INFO : Ignition finished successfully
Jul 10 00:53:33.481962 systemd[1]: Stopped iscsid.service.
Jul 10 00:53:33.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.482921 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:53:33.482986 systemd[1]: Finished initrd-cleanup.service.
Jul 10 00:53:33.483687 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:53:33.483712 systemd[1]: Closed iscsid.socket.
Jul 10 00:53:33.486670 systemd[1]: Stopping iscsiuio.service...
Jul 10 00:53:33.487704 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 10 00:53:33.487758 systemd[1]: Stopped iscsiuio.service.
Jul 10 00:53:33.487905 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:53:33.487925 systemd[1]: Closed iscsiuio.socket.
Jul 10 00:53:33.488367 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:53:33.488533 systemd[1]: Stopped ignition-mount.service.
Jul 10 00:53:33.488673 systemd[1]: Stopped target network.target.
Jul 10 00:53:33.488758 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:53:33.488782 systemd[1]: Stopped ignition-disks.service.
Jul 10 00:53:33.488883 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:53:33.488903 systemd[1]: Stopped ignition-kargs.service.
Jul 10 00:53:33.489002 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:53:33.489021 systemd[1]: Stopped ignition-setup.service.
Jul 10 00:53:33.489172 systemd[1]: Stopping systemd-networkd.service...
Jul 10 00:53:33.489298 systemd[1]: Stopping systemd-resolved.service...
Jul 10 00:53:33.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.496000 audit: BPF prog-id=9 op=UNLOAD
Jul 10 00:53:33.493180 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:53:33.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.493239 systemd[1]: Stopped systemd-networkd.service.
Jul 10 00:53:33.493618 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:53:33.493636 systemd[1]: Closed systemd-networkd.socket.
Jul 10 00:53:33.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.499000 audit: BPF prog-id=6 op=UNLOAD
Jul 10 00:53:33.494159 systemd[1]: Stopping network-cleanup.service...
Jul 10 00:53:33.494280 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:53:33.494309 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 10 00:53:33.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.494461 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 10 00:53:33.494499 systemd[1]: Stopped afterburn-network-kargs.service.
Jul 10 00:53:33.494628 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:53:33.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.494657 systemd[1]: Stopped systemd-sysctl.service.
Jul 10 00:53:33.494834 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:53:33.494857 systemd[1]: Stopped systemd-modules-load.service.
Jul 10 00:53:33.497022 systemd[1]: Stopping systemd-udevd.service...
Jul 10 00:53:33.497929 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:53:33.497978 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:53:33.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.498326 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:53:33.498383 systemd[1]: Stopped systemd-resolved.service.
Jul 10 00:53:33.500159 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:53:33.500226 systemd[1]: Stopped systemd-udevd.service.
Jul 10 00:53:33.501064 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:53:33.501087 systemd[1]: Closed systemd-udevd-control.socket.
Jul 10 00:53:33.501326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:53:33.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.501346 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 10 00:53:33.501440 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:53:33.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.501464 systemd[1]: Stopped dracut-pre-udev.service.
Jul 10 00:53:33.501713 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:53:33.501734 systemd[1]: Stopped dracut-cmdline.service.
Jul 10 00:53:33.502402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:53:33.502426 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 10 00:53:33.503560 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 10 00:53:33.503782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:53:33.503829 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 10 00:53:33.509245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:53:33.509316 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 10 00:53:33.509948 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:53:33.510005 systemd[1]: Stopped network-cleanup.service.
Jul 10 00:53:33.687578 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:53:33.687671 systemd[1]: Stopped sysroot-boot.service.
Jul 10 00:53:33.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.688034 systemd[1]: Reached target initrd-switch-root.target.
Jul 10 00:53:33.688181 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:53:33.688213 systemd[1]: Stopped initrd-setup-root.service.
Jul 10 00:53:33.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:33.688951 systemd[1]: Starting initrd-switch-root.service...
Jul 10 00:53:33.720847 systemd[1]: Switching root.
Jul 10 00:53:33.722000 audit: BPF prog-id=8 op=UNLOAD
Jul 10 00:53:33.722000 audit: BPF prog-id=7 op=UNLOAD
Jul 10 00:53:33.722000 audit: BPF prog-id=5 op=UNLOAD
Jul 10 00:53:33.722000 audit: BPF prog-id=4 op=UNLOAD
Jul 10 00:53:33.722000 audit: BPF prog-id=3 op=UNLOAD
Jul 10 00:53:33.738620 systemd-journald[216]: Journal stopped
Jul 10 00:53:37.957527 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:53:37.957547 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 10 00:53:37.957555 kernel: SELinux: Class anon_inode not defined in policy.
Jul 10 00:53:37.957562 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 10 00:53:37.957567 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:53:37.957574 kernel: SELinux: policy capability open_perms=1
Jul 10 00:53:37.957581 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:53:37.957587 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:53:37.957593 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:53:37.957599 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:53:37.957606 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:53:37.957612 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:53:37.957620 systemd[1]: Successfully loaded SELinux policy in 41.525ms.
Jul 10 00:53:37.957628 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.682ms. Jul 10 00:53:37.957636 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:53:37.957643 systemd[1]: Detected virtualization vmware. Jul 10 00:53:37.957651 systemd[1]: Detected architecture x86-64. Jul 10 00:53:37.957658 systemd[1]: Detected first boot. Jul 10 00:53:37.957665 systemd[1]: Initializing machine ID from random generator. Jul 10 00:53:37.957671 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:53:37.957678 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:53:37.957685 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:53:37.957692 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:53:37.957700 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:53:37.957709 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:53:37.957716 systemd[1]: Unnecessary job was removed for dev-sda6.device. Jul 10 00:53:37.957722 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:53:37.957729 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:53:37.957736 systemd[1]: Created slice system-getty.slice. 
Jul 10 00:53:37.958235 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:53:37.958246 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:53:37.958255 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:53:37.958263 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:53:37.958270 systemd[1]: Created slice user.slice. Jul 10 00:53:37.958276 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:53:37.958289 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:53:37.958299 systemd[1]: Set up automount boot.automount. Jul 10 00:53:37.958306 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:53:37.958313 systemd[1]: Reached target integritysetup.target. Jul 10 00:53:37.958320 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:53:37.958330 systemd[1]: Reached target remote-fs.target. Jul 10 00:53:37.958339 systemd[1]: Reached target slices.target. Jul 10 00:53:37.958346 systemd[1]: Reached target swap.target. Jul 10 00:53:37.958353 systemd[1]: Reached target torcx.target. Jul 10 00:53:37.958360 systemd[1]: Reached target veritysetup.target. Jul 10 00:53:37.958367 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:53:37.958375 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:53:37.958382 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:53:37.958390 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:53:37.958397 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:53:37.958404 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:53:37.958411 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:53:37.958424 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:53:37.958433 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:53:37.958448 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:53:37.958456 systemd[1]: Mounting dev-mqueue.mount... 
Jul 10 00:53:37.958464 systemd[1]: Mounting media.mount... Jul 10 00:53:37.958472 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:53:37.958485 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:53:37.958493 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:53:37.958500 systemd[1]: Mounting tmp.mount... Jul 10 00:53:37.958509 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:53:37.958517 systemd[1]: Starting ignition-delete-config.service... Jul 10 00:53:37.958524 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:53:37.958534 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:53:37.958542 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:53:37.958550 systemd[1]: Starting modprobe@drm.service... Jul 10 00:53:37.958557 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:53:37.958564 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:53:37.958571 systemd[1]: Starting modprobe@loop.service... Jul 10 00:53:37.958580 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:53:37.958588 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 10 00:53:37.958595 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 10 00:53:37.958603 systemd[1]: Starting systemd-journald.service... Jul 10 00:53:37.958610 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:53:37.958621 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:53:37.958628 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:53:37.958635 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:53:37.958704 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:53:37.958722 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:53:37.958731 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:53:37.958739 systemd[1]: Mounted media.mount. Jul 10 00:53:37.958746 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:53:37.958753 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:53:37.958760 systemd[1]: Mounted tmp.mount. Jul 10 00:53:37.958768 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:53:37.958775 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:53:37.958782 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:53:37.958791 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:53:37.958801 systemd-journald[1027]: Journal started Jul 10 00:53:37.958833 systemd-journald[1027]: Runtime Journal (/run/log/journal/9aff94b4c0d14a28aa592df31db1f1ca) is 4.8M, max 38.8M, 34.0M free. Jul 10 00:53:37.797000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:53:37.797000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 10 00:53:37.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:53:37.953000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:53:37.953000 audit[1027]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffceca089a0 a2=4000 a3=7ffceca08a3c items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:53:37.953000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:53:37.966419 systemd[1]: Started systemd-journald.service. Jul 10 00:53:37.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:53:37.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.960859 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:53:37.961165 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:53:37.961438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:53:37.961551 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:53:37.961777 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:53:37.961853 systemd[1]: Finished modprobe@drm.service. Jul 10 00:53:37.962063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:53:37.962140 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:53:37.962722 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:53:37.963585 systemd[1]: Reached target network-pre.target. 
Jul 10 00:53:37.964770 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:53:37.965222 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:53:37.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.976667 systemd-journald[1027]: Time spent on flushing to /var/log/journal/9aff94b4c0d14a28aa592df31db1f1ca is 73.591ms for 1922 entries. Jul 10 00:53:37.976667 systemd-journald[1027]: System Journal (/var/log/journal/9aff94b4c0d14a28aa592df31db1f1ca) is 8.0M, max 584.8M, 576.8M free. Jul 10 00:53:38.182284 systemd-journald[1027]: Received client request to flush runtime journal. Jul 10 00:53:38.182346 kernel: fuse: init (API version 7.34) Jul 10 00:53:38.182368 kernel: loop: module loaded Jul 10 00:53:37.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:53:38.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.968151 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:53:38.182805 jq[1019]: true Jul 10 00:53:38.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:37.970154 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:53:37.971760 systemd[1]: Starting systemd-sysctl.service... 
Jul 10 00:53:37.975143 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:53:37.975946 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:53:37.999864 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:53:38.184623 jq[1084]: true Jul 10 00:53:37.999967 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:53:38.000969 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:53:38.003584 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:53:38.013418 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:53:38.185612 udevadm[1099]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:53:38.030397 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:53:38.030531 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:53:38.031584 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:53:38.035206 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:53:38.062537 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:53:38.062648 systemd[1]: Finished modprobe@loop.service. Jul 10 00:53:38.062833 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:53:38.123210 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:53:38.124312 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:53:38.126898 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:53:38.127960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:53:38.183168 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:53:38.352134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:53:38.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:53:38.678463 ignition[1085]: Ignition 2.14.0 Jul 10 00:53:38.678708 ignition[1085]: deleting config from guestinfo properties Jul 10 00:53:38.847588 ignition[1085]: Successfully deleted config Jul 10 00:53:38.848304 systemd[1]: Finished ignition-delete-config.service. Jul 10 00:53:38.852298 kernel: kauditd_printk_skb: 77 callbacks suppressed Jul 10 00:53:38.852371 kernel: audit: type=1130 audit(1752108818.846:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:38.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.115462 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:53:39.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.116492 systemd[1]: Starting systemd-udevd.service... Jul 10 00:53:39.118496 kernel: audit: type=1130 audit(1752108819.114:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.130784 systemd-udevd[1110]: Using default interface naming scheme 'v252'. Jul 10 00:53:39.357227 systemd[1]: Started systemd-udevd.service. Jul 10 00:53:39.361919 kernel: audit: type=1130 audit(1752108819.355:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:53:39.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.358860 systemd[1]: Starting systemd-networkd.service... Jul 10 00:53:39.368644 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:53:39.397361 systemd[1]: Found device dev-ttyS0.device. Jul 10 00:53:39.411509 systemd[1]: Started systemd-userdbd.service. Jul 10 00:53:39.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.415638 kernel: audit: type=1130 audit(1752108819.410:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.462494 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 10 00:53:39.470494 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:53:39.548943 systemd-networkd[1112]: lo: Link UP Jul 10 00:53:39.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.548948 systemd-networkd[1112]: lo: Gained carrier Jul 10 00:53:39.549264 systemd-networkd[1112]: Enumeration completed Jul 10 00:53:39.549336 systemd-networkd[1112]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 10 00:53:39.549354 systemd[1]: Started systemd-networkd.service. 
Jul 10 00:53:39.555138 kernel: audit: type=1130 audit(1752108819.548:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.555209 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 10 00:53:39.555336 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 10 00:53:39.556531 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jul 10 00:53:39.557211 systemd-networkd[1112]: ens192: Link UP Jul 10 00:53:39.557432 systemd-networkd[1112]: ens192: Gained carrier Jul 10 00:53:39.562491 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Jul 10 00:53:39.564168 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 10 00:53:39.564267 kernel: Guest personality initialized and is active Jul 10 00:53:39.567496 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 00:53:39.567552 kernel: Initialized host personality Jul 10 00:53:39.569394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 10 00:53:39.562000 audit[1118]: AVC avc: denied { confidentiality } for pid=1118 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 00:53:39.577555 kernel: audit: type=1400 audit(1752108819.562:118): avc: denied { confidentiality } for pid=1118 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 00:53:39.562000 audit[1118]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a13169da70 a1=338ac a2=7fea34b88bc5 a3=5 items=110 ppid=1110 pid=1118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:53:39.562000 audit: CWD cwd="/" Jul 10 00:53:39.562000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.586119 kernel: audit: type=1300 audit(1752108819.562:118): arch=c000003e syscall=175 success=yes exit=0 a0=55a13169da70 a1=338ac a2=7fea34b88bc5 a3=5 items=110 ppid=1110 pid=1118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:53:39.586186 kernel: audit: type=1307 audit(1752108819.562:118): cwd="/" Jul 10 00:53:39.586204 kernel: audit: type=1302 audit(1752108819.562:118): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.586216 kernel: audit: type=1302 audit(1752108819.562:118): item=1 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=1 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=2 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=3 name=(null) inode=23777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=4 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=5 name=(null) inode=23778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=6 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=7 name=(null) inode=23779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=8 name=(null) inode=23779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=9 name=(null) inode=23780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=10 name=(null) inode=23779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=11 name=(null) inode=23781 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=12 name=(null) inode=23779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=13 name=(null) inode=23782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=14 name=(null) inode=23779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=15 name=(null) inode=23783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=16 name=(null) inode=23779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=17 name=(null) inode=23784 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=18 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=19 name=(null) inode=23785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=20 name=(null) inode=23785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=21 name=(null) inode=23786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=22 name=(null) inode=23785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=23 name=(null) inode=23787 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=24 name=(null) inode=23785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=25 name=(null) inode=23788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=26 name=(null) inode=23785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=27 name=(null) inode=23789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:53:39.562000 audit: PATH item=28 name=(null) inode=23785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=29 name=(null) inode=23790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=30 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=31 name=(null) inode=23791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=32 name=(null) inode=23791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=33 name=(null) inode=23792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=34 name=(null) inode=23791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=35 name=(null) inode=23793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=36 name=(null) inode=23791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=37 
name=(null) inode=23794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=38 name=(null) inode=23791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=39 name=(null) inode=23795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=40 name=(null) inode=23791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=41 name=(null) inode=23796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=42 name=(null) inode=23776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=43 name=(null) inode=23797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=44 name=(null) inode=23797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=45 name=(null) inode=23798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=46 name=(null) inode=23797 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=47 name=(null) inode=23799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=48 name=(null) inode=23797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=49 name=(null) inode=23800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=50 name=(null) inode=23797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=51 name=(null) inode=23801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=52 name=(null) inode=23797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=53 name=(null) inode=23802 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=55 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=56 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=57 name=(null) inode=23804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=58 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=59 name=(null) inode=23805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=60 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=61 name=(null) inode=23806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=62 name=(null) inode=23806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=63 name=(null) inode=23807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=64 name=(null) inode=23806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=65 name=(null) inode=23808 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=66 name=(null) inode=23806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=67 name=(null) inode=23809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=68 name=(null) inode=23806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=69 name=(null) inode=23810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=70 name=(null) inode=23806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=71 name=(null) inode=23811 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=72 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=73 name=(null) inode=23812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=74 name=(null) inode=23812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=75 name=(null) inode=23813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=76 name=(null) inode=23812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=77 name=(null) inode=23814 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=78 name=(null) inode=23812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=79 name=(null) inode=23815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=80 name=(null) inode=23812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=81 name=(null) inode=23816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=82 name=(null) inode=23812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:53:39.562000 audit: PATH item=83 name=(null) inode=23817 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=84 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=85 name=(null) inode=23818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=86 name=(null) inode=23818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=87 name=(null) inode=23819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=88 name=(null) inode=23818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=89 name=(null) inode=23820 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=90 name=(null) inode=23818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=91 name=(null) inode=23821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=92 
name=(null) inode=23818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=93 name=(null) inode=23822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=94 name=(null) inode=23818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=95 name=(null) inode=23823 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=96 name=(null) inode=23803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=97 name=(null) inode=23824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=98 name=(null) inode=23824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=99 name=(null) inode=23825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=100 name=(null) inode=23824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=101 name=(null) inode=23826 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=102 name=(null) inode=23824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=103 name=(null) inode=23827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=104 name=(null) inode=23824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=105 name=(null) inode=23828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=106 name=(null) inode=23824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=107 name=(null) inode=23829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PATH item=109 name=(null) inode=23830 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:53:39.562000 audit: PROCTITLE proctitle="(udev-worker)" Jul 10 00:53:39.603496 kernel: piix4_smbus 
0000:00:07.3: SMBus Host Controller not enabled! Jul 10 00:53:39.608489 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 00:53:39.620500 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:53:39.627573 (udev-worker)[1117]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 10 00:53:39.636897 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:53:39.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.638376 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:53:39.704790 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:53:39.731123 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:53:39.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.731337 systemd[1]: Reached target cryptsetup.target. Jul 10 00:53:39.732489 systemd[1]: Starting lvm2-activation.service... Jul 10 00:53:39.735771 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:53:39.762140 systemd[1]: Finished lvm2-activation.service. Jul 10 00:53:39.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.762318 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:53:39.762416 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jul 10 00:53:39.762432 systemd[1]: Reached target local-fs.target. Jul 10 00:53:39.762534 systemd[1]: Reached target machines.target. Jul 10 00:53:39.763572 systemd[1]: Starting ldconfig.service... Jul 10 00:53:39.774812 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:53:39.774859 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:53:39.775874 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:53:39.776751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:53:39.777689 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:53:39.778638 systemd[1]: Starting systemd-sysext.service... Jul 10 00:53:39.812193 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1150 (bootctl) Jul 10 00:53:39.813032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:53:39.823133 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:53:39.825455 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:53:39.825596 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:53:39.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:39.838825 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 10 00:53:39.854494 kernel: loop0: detected capacity change from 0 to 221472 Jul 10 00:53:40.861359 systemd-fsck[1163]: fsck.fat 4.2 (2021-01-31) Jul 10 00:53:40.861359 systemd-fsck[1163]: /dev/sda1: 790 files, 120731/258078 clusters Jul 10 00:53:40.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:40.862212 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:53:40.863681 systemd[1]: Mounting boot.mount... Jul 10 00:53:40.994739 systemd[1]: Mounted boot.mount. Jul 10 00:53:41.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.114693 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:53:41.451500 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:53:41.542541 systemd-networkd[1112]: ens192: Gained IPv6LL Jul 10 00:53:41.555494 kernel: loop1: detected capacity change from 0 to 221472 Jul 10 00:53:41.629665 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:53:41.630045 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:53:41.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.769851 (sd-sysext)[1171]: Using extensions 'kubernetes'. Jul 10 00:53:41.770495 (sd-sysext)[1171]: Merged extensions into '/usr'. Jul 10 00:53:41.801437 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:53:41.802671 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:53:41.803539 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:53:41.804360 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:53:41.805176 systemd[1]: Starting modprobe@loop.service... Jul 10 00:53:41.805324 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:53:41.805406 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:53:41.805487 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:53:41.808190 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:53:41.811892 systemd[1]: Finished systemd-sysext.service. Jul 10 00:53:41.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.812409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:53:41.812560 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:53:41.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.813198 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:53:41.813326 systemd[1]: Finished modprobe@loop.service. 
Jul 10 00:53:41.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.815040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:53:41.815174 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:53:41.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:41.817441 systemd[1]: Starting ensure-sysext.service... Jul 10 00:53:41.817705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:53:41.817752 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:53:41.818783 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:53:41.826255 systemd[1]: Reloading. Jul 10 00:53:41.837571 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:53:41.859771 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 10 00:53:41.864919 /usr/lib/systemd/system-generators/torcx-generator[1205]: time="2025-07-10T00:53:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:53:41.864941 /usr/lib/systemd/system-generators/torcx-generator[1205]: time="2025-07-10T00:53:41Z" level=info msg="torcx already run" Jul 10 00:53:41.869695 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:53:41.943065 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:53:41.943078 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:53:41.955226 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:53:42.006207 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:53:42.007522 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:53:42.008816 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:53:42.010051 systemd[1]: Starting modprobe@loop.service... Jul 10 00:53:42.010279 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:53:42.010474 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 10 00:53:42.010639 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:53:42.011439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:53:42.011789 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:53:42.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:42.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:42.012448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:53:42.012697 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:53:42.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:42.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:53:42.013298 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:53:42.015021 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:53:42.017645 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:53:42.018991 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:53:42.019285 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 10 00:53:42.019414 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 10 00:53:42.019551 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:53:42.020273 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:53:42.020425 systemd[1]: Finished modprobe@loop.service.
Jul 10 00:53:42.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.021057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:53:42.021193 systemd[1]: Finished modprobe@dm_mod.service.
Jul 10 00:53:42.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.021627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:53:42.021784 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 10 00:53:42.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.022198 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:53:42.022314 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 10 00:53:42.023972 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:53:42.024875 systemd[1]: Starting modprobe@dm_mod.service...
Jul 10 00:53:42.028185 systemd[1]: Starting modprobe@drm.service...
Jul 10 00:53:42.029318 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 10 00:53:42.030409 systemd[1]: Starting modprobe@loop.service...
Jul 10 00:53:42.030654 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 10 00:53:42.030789 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 10 00:53:42.032482 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 10 00:53:42.032727 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:53:42.033460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:53:42.033629 systemd[1]: Finished modprobe@dm_mod.service.
Jul 10 00:53:42.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.034078 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:53:42.034213 systemd[1]: Finished modprobe@drm.service.
Jul 10 00:53:42.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.034806 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:53:42.034958 systemd[1]: Finished modprobe@loop.service.
Jul 10 00:53:42.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.035439 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 10 00:53:42.038664 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:53:42.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.039017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:53:42.039194 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 10 00:53:42.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.039414 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:53:42.057900 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 10 00:53:42.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.828631 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 10 00:53:42.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.830028 systemd[1]: Starting audit-rules.service...
Jul 10 00:53:42.831135 systemd[1]: Starting clean-ca-certificates.service...
Jul 10 00:53:42.832467 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 10 00:53:42.833810 systemd[1]: Starting systemd-resolved.service...
Jul 10 00:53:42.835030 systemd[1]: Starting systemd-timesyncd.service...
Jul 10 00:53:42.836140 systemd[1]: Starting systemd-update-utmp.service...
Jul 10 00:53:42.836639 systemd[1]: Finished clean-ca-certificates.service.
Jul 10 00:53:42.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.836963 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:53:42.847000 audit[1302]: SYSTEM_BOOT pid=1302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.851598 systemd[1]: Finished systemd-update-utmp.service.
Jul 10 00:53:42.923018 systemd[1]: Started systemd-timesyncd.service.
Jul 10 00:53:42.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:53:42.923199 systemd[1]: Reached target time-set.target.
Jul 10 00:55:08.843096 systemd-timesyncd[1301]: Contacted time server 44.190.5.123:123 (0.flatcar.pool.ntp.org).
Jul 10 00:55:08.843181 systemd-timesyncd[1301]: Initial clock synchronization to Thu 2025-07-10 00:55:08.842995 UTC.
Jul 10 00:55:08.852254 systemd-resolved[1299]: Positive Trust Anchors:
Jul 10 00:55:08.852461 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:55:08.852526 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 10 00:55:08.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:55:08.977502 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 10 00:55:08.983000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 10 00:55:08.983000 audit[1319]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef4732890 a2=420 a3=0 items=0 ppid=1296 pid=1319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 10 00:55:08.983000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 10 00:55:08.985303 augenrules[1319]: No rules
Jul 10 00:55:08.985591 systemd[1]: Finished audit-rules.service.
Jul 10 00:55:09.088903 systemd-resolved[1299]: Defaulting to hostname 'linux'.
Jul 10 00:55:09.090079 systemd[1]: Started systemd-resolved.service.
Jul 10 00:55:09.090232 systemd[1]: Reached target network.target.
Jul 10 00:55:09.090323 systemd[1]: Reached target network-online.target.
Jul 10 00:55:09.090419 systemd[1]: Reached target nss-lookup.target.
Jul 10 00:55:09.234891 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:55:09.238550 systemd[1]: Finished ldconfig.service.
Jul 10 00:55:09.239754 systemd[1]: Starting systemd-update-done.service...
Jul 10 00:55:09.245020 systemd[1]: Finished systemd-update-done.service.
Jul 10 00:55:09.245221 systemd[1]: Reached target sysinit.target.
Jul 10 00:55:09.245419 systemd[1]: Started motdgen.path.
Jul 10 00:55:09.245549 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 10 00:55:09.245785 systemd[1]: Started logrotate.timer.
Jul 10 00:55:09.245950 systemd[1]: Started mdadm.timer.
Jul 10 00:55:09.246054 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 10 00:55:09.246170 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:55:09.246199 systemd[1]: Reached target paths.target.
Jul 10 00:55:09.246301 systemd[1]: Reached target timers.target.
Jul 10 00:55:09.246644 systemd[1]: Listening on dbus.socket.
Jul 10 00:55:09.247977 systemd[1]: Starting docker.socket...
Jul 10 00:55:09.249738 systemd[1]: Listening on sshd.socket.
Jul 10 00:55:09.250048 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 10 00:55:09.250527 systemd[1]: Listening on docker.socket.
Jul 10 00:55:09.250731 systemd[1]: Reached target sockets.target.
Jul 10 00:55:09.250897 systemd[1]: Reached target basic.target.
Jul 10 00:55:09.251179 systemd[1]: System is tainted: cgroupsv1
Jul 10 00:55:09.251280 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 10 00:55:09.251612 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 10 00:55:09.252927 systemd[1]: Starting containerd.service...
Jul 10 00:55:09.254752 systemd[1]: Starting dbus.service...
Jul 10 00:55:09.256292 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 10 00:55:09.258273 systemd[1]: Starting extend-filesystems.service...
Jul 10 00:55:09.258716 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 10 00:55:09.259747 jq[1334]: false
Jul 10 00:55:09.261156 systemd[1]: Starting kubelet.service...
Jul 10 00:55:09.262898 systemd[1]: Starting motdgen.service...
Jul 10 00:55:09.265919 systemd[1]: Starting prepare-helm.service...
Jul 10 00:55:09.269238 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 10 00:55:09.275919 systemd[1]: Starting sshd-keygen.service...
Jul 10 00:55:09.281660 extend-filesystems[1335]: Found loop1
Jul 10 00:55:09.281660 extend-filesystems[1335]: Found sda
Jul 10 00:55:09.280654 systemd[1]: Starting systemd-logind.service...
Jul 10 00:55:09.287662 extend-filesystems[1335]: Found sda1
Jul 10 00:55:09.287662 extend-filesystems[1335]: Found sda2
Jul 10 00:55:09.287662 extend-filesystems[1335]: Found sda3
Jul 10 00:55:09.287662 extend-filesystems[1335]: Found usr
Jul 10 00:55:09.282051 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 10 00:55:09.314447 extend-filesystems[1335]: Found sda4
Jul 10 00:55:09.314447 extend-filesystems[1335]: Found sda6
Jul 10 00:55:09.314447 extend-filesystems[1335]: Found sda7
Jul 10 00:55:09.314447 extend-filesystems[1335]: Found sda9
Jul 10 00:55:09.314447 extend-filesystems[1335]: Checking size of /dev/sda9
Jul 10 00:55:09.282129 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 00:55:09.285042 systemd[1]: Starting update-engine.service...
Jul 10 00:55:09.289167 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 10 00:55:09.292126 systemd[1]: Starting vmtoolsd.service...
Jul 10 00:55:09.294295 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 00:55:09.294538 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 10 00:55:09.316471 jq[1350]: true
Jul 10 00:55:09.316614 jq[1362]: true
Jul 10 00:55:09.318595 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 00:55:09.318762 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 10 00:55:09.327669 systemd[1]: Started vmtoolsd.service.
Jul 10 00:55:09.329455 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 00:55:09.329627 systemd[1]: Finished motdgen.service.
Jul 10 00:55:09.352530 env[1361]: time="2025-07-10T00:55:09.352496068Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 10 00:55:09.357593 extend-filesystems[1335]: Old size kept for /dev/sda9
Jul 10 00:55:09.357787 extend-filesystems[1335]: Found sr0
Jul 10 00:55:09.357911 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 00:55:09.358064 systemd[1]: Finished extend-filesystems.service.
Jul 10 00:55:09.380244 tar[1358]: linux-amd64/helm
Jul 10 00:55:09.388628 env[1361]: time="2025-07-10T00:55:09.388596365Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 10 00:55:09.388812 env[1361]: time="2025-07-10T00:55:09.388800396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391386 env[1361]: time="2025-07-10T00:55:09.391336531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391386 env[1361]: time="2025-07-10T00:55:09.391380475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391576 env[1361]: time="2025-07-10T00:55:09.391559559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391576 env[1361]: time="2025-07-10T00:55:09.391572768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391631 env[1361]: time="2025-07-10T00:55:09.391581186Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 10 00:55:09.391631 env[1361]: time="2025-07-10T00:55:09.391588307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391664 env[1361]: time="2025-07-10T00:55:09.391637985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391798 env[1361]: time="2025-07-10T00:55:09.391784312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391900 env[1361]: time="2025-07-10T00:55:09.391885657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:55:09.391900 env[1361]: time="2025-07-10T00:55:09.391897498Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 10 00:55:09.391955 env[1361]: time="2025-07-10T00:55:09.391926253Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 10 00:55:09.391955 env[1361]: time="2025-07-10T00:55:09.391935228Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 00:55:09.439760 systemd-logind[1346]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440327659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440378013Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440392845Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440422043Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440433155Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440461816Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440478456Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440493122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440502838Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440514462Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440522686Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.441469 env[1361]: time="2025-07-10T00:55:09.440530054Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 10 00:55:09.439782 systemd-logind[1346]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 10 00:55:09.441361 systemd-logind[1346]: New seat seat0.
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442079613Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442169523Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442450807Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442476971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442492435Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442534111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442546901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442558117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442566071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442577518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442587209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442594115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442600428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442608850Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 10 00:55:09.442901 env[1361]: time="2025-07-10T00:55:09.442703271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442718728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442731583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442743340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442760225Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442772769Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442787322Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 10 00:55:09.443348 env[1361]: time="2025-07-10T00:55:09.442816175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 10 00:55:09.450220 env[1361]: time="2025-07-10T00:55:09.445435958Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 10 00:55:09.453934 env[1361]: time="2025-07-10T00:55:09.450227717Z" level=info msg="Connect containerd service"
Jul 10 00:55:09.453934 env[1361]: time="2025-07-10T00:55:09.450266367Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 10 00:55:09.453934 env[1361]: time="2025-07-10T00:55:09.451161124Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:55:09.453215 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 10 00:55:09.454087 bash[1393]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455344746Z" level=info msg="Start subscribing containerd event"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455403260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455412916Z" level=info msg="Start recovering state"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455444263Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455459542Z" level=info msg="Start event monitor"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455475962Z" level=info msg="Start snapshots syncer"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455483649Z" level=info msg="Start cni network conf syncer for default"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455488063Z" level=info msg="Start streaming server"
Jul 10 00:55:09.456277 env[1361]: time="2025-07-10T00:55:09.455869729Z" level=info msg="containerd successfully booted in 0.103998s"
Jul 10 00:55:09.455566 systemd[1]: Started containerd.service.
Jul 10 00:55:09.459473 dbus-daemon[1333]: [system] SELinux support is enabled
Jul 10 00:55:09.459612 systemd[1]: Started dbus.service.
Jul 10 00:55:09.461162 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 00:55:09.461564 dbus-daemon[1333]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 10 00:55:09.461186 systemd[1]: Reached target system-config.target.
Jul 10 00:55:09.461336 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 00:55:09.461346 systemd[1]: Reached target user-config.target.
Jul 10 00:55:09.461642 systemd[1]: Started systemd-logind.service.
Jul 10 00:55:09.494432 update_engine[1349]: I0710 00:55:09.492256 1349 main.cc:92] Flatcar Update Engine starting
Jul 10 00:55:09.496371 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:55:09.497751 systemd[1]: Started update-engine.service.
Jul 10 00:55:09.497979 update_engine[1349]: I0710 00:55:09.497802 1349 update_check_scheduler.cc:74] Next update check in 9m41s
Jul 10 00:55:09.499710 systemd[1]: Started locksmithd.service.
Jul 10 00:55:09.838082 tar[1358]: linux-amd64/LICENSE
Jul 10 00:55:09.838082 tar[1358]: linux-amd64/README.md
Jul 10 00:55:09.842939 systemd[1]: Finished prepare-helm.service.
Jul 10 00:55:10.032103 sshd_keygen[1380]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 00:55:10.044898 locksmithd[1420]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 00:55:10.056975 systemd[1]: Finished sshd-keygen.service.
Jul 10 00:55:10.058487 systemd[1]: Starting issuegen.service...
Jul 10 00:55:10.063294 systemd[1]: issuegen.service: Deactivated successfully.
Jul 10 00:55:10.063498 systemd[1]: Finished issuegen.service.
Jul 10 00:55:10.065065 systemd[1]: Starting systemd-user-sessions.service...
Jul 10 00:55:10.071149 systemd[1]: Finished systemd-user-sessions.service.
Jul 10 00:55:10.072399 systemd[1]: Started getty@tty1.service.
Jul 10 00:55:10.073581 systemd[1]: Started serial-getty@ttyS0.service.
Jul 10 00:55:10.073848 systemd[1]: Reached target getty.target.
Jul 10 00:55:11.876563 systemd[1]: Started kubelet.service.
Jul 10 00:55:11.876922 systemd[1]: Reached target multi-user.target.
Jul 10 00:55:11.877969 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 10 00:55:11.882522 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 10 00:55:11.882648 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 10 00:55:11.884820 systemd[1]: Startup finished in 7.297s (kernel) + 12.056s (userspace) = 19.354s.
Jul 10 00:55:12.001047 login[1490]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 10 00:55:12.001165 login[1489]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 10 00:55:12.009281 systemd[1]: Created slice user-500.slice.
Jul 10 00:55:12.009993 systemd[1]: Starting user-runtime-dir@500.service...
Jul 10 00:55:12.012485 systemd-logind[1346]: New session 2 of user core.
Jul 10 00:55:12.015266 systemd-logind[1346]: New session 1 of user core.
Jul 10 00:55:12.024723 systemd[1]: Finished user-runtime-dir@500.service.
Jul 10 00:55:12.025547 systemd[1]: Starting user@500.service...
Jul 10 00:55:12.036495 (systemd)[1502]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:55:12.091275 systemd[1502]: Queued start job for default target default.target.
Jul 10 00:55:12.091670 systemd[1502]: Reached target paths.target.
Jul 10 00:55:12.091751 systemd[1502]: Reached target sockets.target.
Jul 10 00:55:12.091815 systemd[1502]: Reached target timers.target.
Jul 10 00:55:12.091885 systemd[1502]: Reached target basic.target.
Jul 10 00:55:12.092017 systemd[1]: Started user@500.service.
Jul 10 00:55:12.092664 systemd[1]: Started session-1.scope.
Jul 10 00:55:12.093060 systemd[1]: Started session-2.scope.
Jul 10 00:55:12.093614 systemd[1502]: Reached target default.target.
Jul 10 00:55:12.093796 systemd[1502]: Startup finished in 52ms.
Jul 10 00:55:13.027133 kubelet[1496]: E0710 00:55:13.027091 1496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:55:13.028585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:55:13.028695 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:55:23.029444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:55:23.029606 systemd[1]: Stopped kubelet.service.
Jul 10 00:55:23.030650 systemd[1]: Starting kubelet.service...
Jul 10 00:55:23.088580 systemd[1]: Started kubelet.service.
Jul 10 00:55:23.141080 kubelet[1538]: E0710 00:55:23.141038 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:55:23.143129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:55:23.143226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:55:33.279521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:55:33.279747 systemd[1]: Stopped kubelet.service.
Jul 10 00:55:33.281255 systemd[1]: Starting kubelet.service...
Jul 10 00:55:33.510456 systemd[1]: Started kubelet.service.
Jul 10 00:55:33.546571 kubelet[1553]: E0710 00:55:33.546504 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:55:33.547689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:55:33.547780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:55:39.620711 systemd[1]: Created slice system-sshd.slice.
Jul 10 00:55:39.621691 systemd[1]: Started sshd@0-139.178.70.107:22-139.178.68.195:44344.service.
Jul 10 00:55:39.680912 sshd[1560]: Accepted publickey for core from 139.178.68.195 port 44344 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q
Jul 10 00:55:39.681653 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:55:39.684196 systemd-logind[1346]: New session 3 of user core.
Jul 10 00:55:39.684487 systemd[1]: Started session-3.scope.
Jul 10 00:55:39.730794 systemd[1]: Started sshd@1-139.178.70.107:22-139.178.68.195:44358.service.
Jul 10 00:55:39.772077 sshd[1565]: Accepted publickey for core from 139.178.68.195 port 44358 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q
Jul 10 00:55:39.773149 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:55:39.775515 systemd-logind[1346]: New session 4 of user core.
Jul 10 00:55:39.776148 systemd[1]: Started session-4.scope.
Jul 10 00:55:39.827944 systemd[1]: Started sshd@2-139.178.70.107:22-139.178.68.195:44372.service.
Jul 10 00:55:39.828447 sshd[1565]: pam_unix(sshd:session): session closed for user core
Jul 10 00:55:39.830281 systemd[1]: sshd@1-139.178.70.107:22-139.178.68.195:44358.service: Deactivated successfully.
Jul 10 00:55:39.830778 systemd-logind[1346]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:55:39.830794 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:55:39.831438 systemd-logind[1346]: Removed session 4.
Jul 10 00:55:39.860337 sshd[1570]: Accepted publickey for core from 139.178.68.195 port 44372 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q
Jul 10 00:55:39.861033 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:55:39.863377 systemd-logind[1346]: New session 5 of user core.
Jul 10 00:55:39.863776 systemd[1]: Started session-5.scope.
Jul 10 00:55:39.910076 sshd[1570]: pam_unix(sshd:session): session closed for user core
Jul 10 00:55:39.912013 systemd[1]: Started sshd@3-139.178.70.107:22-139.178.68.195:44384.service.
Jul 10 00:55:39.913072 systemd[1]: sshd@2-139.178.70.107:22-139.178.68.195:44372.service: Deactivated successfully.
Jul 10 00:55:39.913690 systemd[1]: session-5.scope: Deactivated successfully.
Jul 10 00:55:39.914018 systemd-logind[1346]: Session 5 logged out. Waiting for processes to exit.
Jul 10 00:55:39.914588 systemd-logind[1346]: Removed session 5.
Jul 10 00:55:39.942659 sshd[1577]: Accepted publickey for core from 139.178.68.195 port 44384 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q
Jul 10 00:55:39.943364 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:55:39.946114 systemd[1]: Started session-6.scope.
Jul 10 00:55:39.946312 systemd-logind[1346]: New session 6 of user core.
Jul 10 00:55:39.998315 sshd[1577]: pam_unix(sshd:session): session closed for user core
Jul 10 00:55:39.999423 systemd[1]: Started sshd@4-139.178.70.107:22-139.178.68.195:44388.service.
Jul 10 00:55:40.004483 systemd[1]: sshd@3-139.178.70.107:22-139.178.68.195:44384.service: Deactivated successfully.
Jul 10 00:55:40.004883 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:55:40.005752 systemd-logind[1346]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:55:40.006354 systemd-logind[1346]: Removed session 6.
Jul 10 00:55:40.030422 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 44388 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q
Jul 10 00:55:40.031215 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:55:40.034082 systemd[1]: Started session-7.scope.
Jul 10 00:55:40.034294 systemd-logind[1346]: New session 7 of user core.
Jul 10 00:55:40.094974 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:55:40.095445 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 10 00:55:40.113052 systemd[1]: Starting docker.service...
Jul 10 00:55:40.135133 env[1600]: time="2025-07-10T00:55:40.135107842Z" level=info msg="Starting up"
Jul 10 00:55:40.136114 env[1600]: time="2025-07-10T00:55:40.136096532Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 10 00:55:40.136114 env[1600]: time="2025-07-10T00:55:40.136109321Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 10 00:55:40.136169 env[1600]: time="2025-07-10T00:55:40.136122038Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 10 00:55:40.136169 env[1600]: time="2025-07-10T00:55:40.136128156Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 10 00:55:40.137022 env[1600]: time="2025-07-10T00:55:40.137006221Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 10 00:55:40.137022 env[1600]: time="2025-07-10T00:55:40.137017429Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 10 00:55:40.137086 env[1600]: time="2025-07-10T00:55:40.137025563Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 10 00:55:40.137086 env[1600]: time="2025-07-10T00:55:40.137030476Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 10 00:55:40.141125 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3117564268-merged.mount: Deactivated successfully.
Jul 10 00:55:40.154756 env[1600]: time="2025-07-10T00:55:40.154730505Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 10 00:55:40.154756 env[1600]: time="2025-07-10T00:55:40.154747108Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 10 00:55:40.154876 env[1600]: time="2025-07-10T00:55:40.154840016Z" level=info msg="Loading containers: start."
Jul 10 00:55:40.238370 kernel: Initializing XFRM netlink socket
Jul 10 00:55:40.261170 env[1600]: time="2025-07-10T00:55:40.261144864Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 10 00:55:40.298588 systemd-networkd[1112]: docker0: Link UP
Jul 10 00:55:40.306994 env[1600]: time="2025-07-10T00:55:40.306978777Z" level=info msg="Loading containers: done."
Jul 10 00:55:40.313668 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2421447258-merged.mount: Deactivated successfully.
Jul 10 00:55:40.316246 env[1600]: time="2025-07-10T00:55:40.316226789Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:55:40.316464 env[1600]: time="2025-07-10T00:55:40.316453417Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 10 00:55:40.316562 env[1600]: time="2025-07-10T00:55:40.316553793Z" level=info msg="Daemon has completed initialization"
Jul 10 00:55:40.324467 systemd[1]: Started docker.service.
Jul 10 00:55:40.327325 env[1600]: time="2025-07-10T00:55:40.327295063Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:55:41.605796 env[1361]: time="2025-07-10T00:55:41.605646548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 10 00:55:42.187641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824523747.mount: Deactivated successfully.
Jul 10 00:55:43.309622 env[1361]: time="2025-07-10T00:55:43.309595029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:43.310467 env[1361]: time="2025-07-10T00:55:43.310449566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:43.311558 env[1361]: time="2025-07-10T00:55:43.311544330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:43.312517 env[1361]: time="2025-07-10T00:55:43.312501453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:43.313007 env[1361]: time="2025-07-10T00:55:43.312992912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 10 00:55:43.313404 env[1361]: time="2025-07-10T00:55:43.313392113Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 10 00:55:43.779502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 00:55:43.779651 systemd[1]: Stopped kubelet.service.
Jul 10 00:55:43.781066 systemd[1]: Starting kubelet.service...
Jul 10 00:55:43.846523 systemd[1]: Started kubelet.service.
Jul 10 00:55:43.917382 kubelet[1730]: E0710 00:55:43.917339 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:55:43.918446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:55:43.918544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:55:44.901753 env[1361]: time="2025-07-10T00:55:44.901722777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:44.902466 env[1361]: time="2025-07-10T00:55:44.902449284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:44.903857 env[1361]: time="2025-07-10T00:55:44.903842104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:44.906300 env[1361]: time="2025-07-10T00:55:44.906282856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:44.906804 env[1361]: time="2025-07-10T00:55:44.906789726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 10 00:55:44.907191 env[1361]: time="2025-07-10T00:55:44.907165658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 10 00:55:46.140226 env[1361]: time="2025-07-10T00:55:46.140200473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:46.164019 env[1361]: time="2025-07-10T00:55:46.163997670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:46.176695 env[1361]: time="2025-07-10T00:55:46.176678609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:46.204818 env[1361]: time="2025-07-10T00:55:46.204784771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:46.205485 env[1361]: time="2025-07-10T00:55:46.205460271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 10 00:55:46.206431 env[1361]: time="2025-07-10T00:55:46.206411209Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 10 00:55:47.571445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728055836.mount: Deactivated successfully.
Jul 10 00:55:48.190526 env[1361]: time="2025-07-10T00:55:48.190486300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:48.205739 env[1361]: time="2025-07-10T00:55:48.205719967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:48.213858 env[1361]: time="2025-07-10T00:55:48.213838602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:48.227732 env[1361]: time="2025-07-10T00:55:48.227714658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:48.228077 env[1361]: time="2025-07-10T00:55:48.228060408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 10 00:55:48.228439 env[1361]: time="2025-07-10T00:55:48.228419040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 00:55:48.877072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078190625.mount: Deactivated successfully.
Jul 10 00:55:50.387715 env[1361]: time="2025-07-10T00:55:50.387685672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:50.394523 env[1361]: time="2025-07-10T00:55:50.394503455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:50.402184 env[1361]: time="2025-07-10T00:55:50.402166694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:50.414053 env[1361]: time="2025-07-10T00:55:50.414028628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:50.414472 env[1361]: time="2025-07-10T00:55:50.414452490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 00:55:50.414738 env[1361]: time="2025-07-10T00:55:50.414724875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:55:51.008927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082427014.mount: Deactivated successfully.
Jul 10 00:55:51.067095 env[1361]: time="2025-07-10T00:55:51.067065301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:51.076335 env[1361]: time="2025-07-10T00:55:51.076309410Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:51.084468 env[1361]: time="2025-07-10T00:55:51.084444384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:51.089547 env[1361]: time="2025-07-10T00:55:51.089526766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:51.089759 env[1361]: time="2025-07-10T00:55:51.089734970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:55:51.090265 env[1361]: time="2025-07-10T00:55:51.090250855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 10 00:55:52.145860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564756954.mount: Deactivated successfully.
Jul 10 00:55:54.029510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 00:55:54.029664 systemd[1]: Stopped kubelet.service.
Jul 10 00:55:54.031206 systemd[1]: Starting kubelet.service...
Jul 10 00:55:54.186810 env[1361]: time="2025-07-10T00:55:54.186739846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:54.188581 env[1361]: time="2025-07-10T00:55:54.188564084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:54.190235 env[1361]: time="2025-07-10T00:55:54.190215169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:54.192819 env[1361]: time="2025-07-10T00:55:54.192796801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:55:54.193162 env[1361]: time="2025-07-10T00:55:54.193136825Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 10 00:55:54.544561 update_engine[1349]: I0710 00:55:54.544529 1349 update_attempter.cc:509] Updating boot flags...
Jul 10 00:55:56.144623 systemd[1]: Started kubelet.service.
Jul 10 00:55:56.214278 kubelet[1780]: E0710 00:55:56.214247 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:55:56.215128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:55:56.215220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:55:56.631687 systemd[1]: Stopped kubelet.service.
Jul 10 00:55:56.633654 systemd[1]: Starting kubelet.service...
Jul 10 00:55:56.652559 systemd[1]: Reloading.
Jul 10 00:55:56.694903 /usr/lib/systemd/system-generators/torcx-generator[1813]: time="2025-07-10T00:55:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 10 00:55:56.694921 /usr/lib/systemd/system-generators/torcx-generator[1813]: time="2025-07-10T00:55:56Z" level=info msg="torcx already run"
Jul 10 00:55:56.765022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 10 00:55:56.765038 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 10 00:55:56.777205 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:55:56.850764 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:55:56.850817 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:55:56.851068 systemd[1]: Stopped kubelet.service.
Jul 10 00:55:56.852932 systemd[1]: Starting kubelet.service...
Jul 10 00:55:57.621495 systemd[1]: Started kubelet.service.
Jul 10 00:55:57.648327 kubelet[1888]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:55:57.648327 kubelet[1888]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:55:57.648327 kubelet[1888]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:55:57.648593 kubelet[1888]: I0710 00:55:57.648401 1888 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:55:57.940033 kubelet[1888]: I0710 00:55:57.939856 1888 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 10 00:55:57.940185 kubelet[1888]: I0710 00:55:57.940175 1888 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:55:57.940407 kubelet[1888]: I0710 00:55:57.940399 1888 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 10 00:55:58.378844 kubelet[1888]: E0710 00:55:58.378807 1888 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:55:58.388446 kubelet[1888]: I0710 00:55:58.388418 1888 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:55:58.424473 kubelet[1888]: E0710 00:55:58.424448 1888 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:55:58.424473 kubelet[1888]: I0710 00:55:58.424470 1888 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:55:58.431934 kubelet[1888]: I0710 00:55:58.431904 1888 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:55:58.432087 kubelet[1888]: I0710 00:55:58.432076 1888 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 10 00:55:58.432178 kubelet[1888]: I0710 00:55:58.432154 1888 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:55:58.432292 kubelet[1888]: I0710 00:55:58.432177 1888 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 10 00:55:58.432385 kubelet[1888]: I0710 00:55:58.432296 1888 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:55:58.432385 kubelet[1888]: I0710 00:55:58.432303 1888 container_manager_linux.go:300] "Creating device plugin manager"
Jul 10 00:55:58.432385 kubelet[1888]: I0710 00:55:58.432370 1888 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:55:58.491018 kubelet[1888]: W0710 00:55:58.490986 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused
Jul 10 00:55:58.491149 kubelet[1888]: E0710 00:55:58.491136 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:55:58.494412 kubelet[1888]: I0710 00:55:58.494397 1888 kubelet.go:408] "Attempting to sync node with API server"
Jul 10 00:55:58.494449 kubelet[1888]: I0710 00:55:58.494421 1888 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:55:58.494449 kubelet[1888]: I0710 00:55:58.494446 1888 kubelet.go:314] "Adding apiserver pod source"
Jul 10 00:55:58.494491 kubelet[1888]: I0710 00:55:58.494459 1888 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:55:58.530434 kubelet[1888]: W0710 00:55:58.530400 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused
Jul 10 00:55:58.530551 kubelet[1888]: E0710 00:55:58.530538 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:55:58.531267 kubelet[1888]: I0710 00:55:58.531256 1888 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 10 00:55:58.531583 kubelet[1888]: I0710 00:55:58.531574 1888 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:55:58.531663 kubelet[1888]: W0710 00:55:58.531655 1888 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:55:58.549428 kubelet[1888]: I0710 00:55:58.549414 1888 server.go:1274] "Started kubelet"
Jul 10 00:55:58.552975 kubelet[1888]: I0710 00:55:58.552944 1888 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:55:58.553891 kubelet[1888]: I0710 00:55:58.553694 1888 server.go:449] "Adding debug handlers to kubelet server"
Jul 10 00:55:58.576148 kubelet[1888]: I0710 00:55:58.576121 1888 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:55:58.576401 kubelet[1888]: I0710 00:55:58.576393 1888 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:55:58.577650 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 10 00:55:58.578166 kubelet[1888]: I0710 00:55:58.578154 1888 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:55:58.578631 kubelet[1888]: I0710 00:55:58.578622 1888 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:55:58.580243 kubelet[1888]: I0710 00:55:58.580236 1888 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 10 00:55:58.580440 kubelet[1888]: E0710 00:55:58.580430 1888 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:55:58.580785 kubelet[1888]: I0710 00:55:58.580777 1888 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 10 00:55:58.580857 kubelet[1888]: I0710 00:55:58.580851 1888 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:55:58.592749 kubelet[1888]: E0710 00:55:58.591700 1888 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.107:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bdd0921dc49a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:55:58.549394586 +0000 UTC m=+0.924587954,LastTimestamp:2025-07-10 00:55:58.549394586 +0000 UTC m=+0.924587954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:55:58.592970 kubelet[1888]: W0710 00:55:58.592937 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused
Jul 10 00:55:58.593001 kubelet[1888]: E0710 00:55:58.592982 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:55:58.593147 kubelet[1888]: I0710 00:55:58.593135 1888 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:55:58.593204 kubelet[1888]: I0710 00:55:58.593191 1888 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:55:58.594267 kubelet[1888]: I0710 00:55:58.594251 1888 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:55:58.596212 kubelet[1888]: E0710 00:55:58.596188 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="200ms"
Jul 10 00:55:58.605376 kubelet[1888]: I0710 00:55:58.605340 1888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:55:58.606014 kubelet[1888]: I0710 00:55:58.606005 1888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:55:58.606082 kubelet[1888]: I0710 00:55:58.606074 1888 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 10 00:55:58.606133 kubelet[1888]: I0710 00:55:58.606126 1888 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 10 00:55:58.606201 kubelet[1888]: E0710 00:55:58.606190 1888 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:55:58.610025 kubelet[1888]: W0710 00:55:58.609999 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused
Jul 10 00:55:58.610139 kubelet[1888]: E0710 00:55:58.610124 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:55:58.610278 kubelet[1888]: E0710 00:55:58.610269 1888 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:55:58.622112 kubelet[1888]: I0710 00:55:58.622097 1888 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:55:58.622229 kubelet[1888]: I0710 00:55:58.622221 1888 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:55:58.622293 kubelet[1888]: I0710 00:55:58.622281 1888 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:55:58.623384 kubelet[1888]: I0710 00:55:58.623374 1888 policy_none.go:49] "None policy: Start" Jul 10 00:55:58.623806 kubelet[1888]: I0710 00:55:58.623791 1888 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:55:58.623871 kubelet[1888]: I0710 00:55:58.623864 1888 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:55:58.628545 kubelet[1888]: I0710 00:55:58.628526 1888 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:55:58.628738 kubelet[1888]: I0710 00:55:58.628730 1888 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:55:58.628814 kubelet[1888]: I0710 00:55:58.628790 1888 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:55:58.630952 kubelet[1888]: I0710 00:55:58.629707 1888 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:55:58.631074 kubelet[1888]: E0710 00:55:58.630184 1888 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:55:58.730030 kubelet[1888]: I0710 00:55:58.730016 1888 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:55:58.730484 kubelet[1888]: E0710 00:55:58.730472 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: 
connection refused" node="localhost" Jul 10 00:55:58.792052 kubelet[1888]: I0710 00:55:58.792021 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/392856983053c3ce3a70e1daab9cbfef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"392856983053c3ce3a70e1daab9cbfef\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:55:58.792152 kubelet[1888]: I0710 00:55:58.792057 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/392856983053c3ce3a70e1daab9cbfef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"392856983053c3ce3a70e1daab9cbfef\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:55:58.792152 kubelet[1888]: I0710 00:55:58.792072 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:55:58.792152 kubelet[1888]: I0710 00:55:58.792084 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:55:58.792152 kubelet[1888]: I0710 00:55:58.792093 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:55:58.792152 kubelet[1888]: I0710 00:55:58.792102 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/392856983053c3ce3a70e1daab9cbfef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"392856983053c3ce3a70e1daab9cbfef\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:55:58.792253 kubelet[1888]: I0710 00:55:58.792113 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:55:58.792253 kubelet[1888]: I0710 00:55:58.792121 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:55:58.792253 kubelet[1888]: I0710 00:55:58.792130 1888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:55:58.797449 kubelet[1888]: E0710 00:55:58.797399 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="400ms" Jul 10 
00:55:58.932145 kubelet[1888]: I0710 00:55:58.931692 1888 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:55:58.932145 kubelet[1888]: E0710 00:55:58.931966 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 10 00:55:59.012366 env[1361]: time="2025-07-10T00:55:59.012109423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:392856983053c3ce3a70e1daab9cbfef,Namespace:kube-system,Attempt:0,}" Jul 10 00:55:59.013978 env[1361]: time="2025-07-10T00:55:59.013915028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:55:59.022535 env[1361]: time="2025-07-10T00:55:59.022410626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:55:59.198402 kubelet[1888]: E0710 00:55:59.198171 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="800ms" Jul 10 00:55:59.333675 kubelet[1888]: I0710 00:55:59.333649 1888 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:55:59.333877 kubelet[1888]: E0710 00:55:59.333860 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 10 00:55:59.485158 kubelet[1888]: W0710 00:55:59.484968 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 10 00:55:59.485158 kubelet[1888]: E0710 00:55:59.485026 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:55:59.604383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201680518.mount: Deactivated successfully. Jul 10 00:55:59.606670 env[1361]: time="2025-07-10T00:55:59.606641311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.608267 env[1361]: time="2025-07-10T00:55:59.608249850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.612003 env[1361]: time="2025-07-10T00:55:59.611335968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.613514 env[1361]: time="2025-07-10T00:55:59.613494288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.616273 env[1361]: time="2025-07-10T00:55:59.616257110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.618298 
kubelet[1888]: W0710 00:55:59.618245 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 10 00:55:59.618298 kubelet[1888]: E0710 00:55:59.618282 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:55:59.619171 env[1361]: time="2025-07-10T00:55:59.619142212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.619701 env[1361]: time="2025-07-10T00:55:59.619662632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.620184 env[1361]: time="2025-07-10T00:55:59.620165647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.620677 env[1361]: time="2025-07-10T00:55:59.620641338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.621122 env[1361]: time="2025-07-10T00:55:59.621103965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.621850 env[1361]: 
time="2025-07-10T00:55:59.621836360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.622529 env[1361]: time="2025-07-10T00:55:59.622449286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:55:59.634686 env[1361]: time="2025-07-10T00:55:59.634566481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:55:59.634686 env[1361]: time="2025-07-10T00:55:59.634590231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:55:59.634686 env[1361]: time="2025-07-10T00:55:59.634597158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:55:59.634819 env[1361]: time="2025-07-10T00:55:59.634697282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47353d8fd6ba9507ff5ad70e4ca8c350480c7aff7cc3398e6f46f9ddbe441839 pid=1929 runtime=io.containerd.runc.v2 Jul 10 00:55:59.640664 env[1361]: time="2025-07-10T00:55:59.640627518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:55:59.640736 env[1361]: time="2025-07-10T00:55:59.640669476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:55:59.640736 env[1361]: time="2025-07-10T00:55:59.640687212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:55:59.640790 env[1361]: time="2025-07-10T00:55:59.640770949Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a4c14ba67359960a12899530bffd79f8d2a1bb0b4dc3cdc7f52fc7956424dfb pid=1942 runtime=io.containerd.runc.v2 Jul 10 00:55:59.679836 env[1361]: time="2025-07-10T00:55:59.679797658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:55:59.679999 env[1361]: time="2025-07-10T00:55:59.679984641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:55:59.680057 env[1361]: time="2025-07-10T00:55:59.680044335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:55:59.680192 env[1361]: time="2025-07-10T00:55:59.680176677Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e24265dad6fc52fd29c8d61f44362e7da017531b0b1597dd75e3f8fce2ca4cc pid=1998 runtime=io.containerd.runc.v2 Jul 10 00:55:59.696294 env[1361]: time="2025-07-10T00:55:59.696267598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:392856983053c3ce3a70e1daab9cbfef,Namespace:kube-system,Attempt:0,} returns sandbox id \"47353d8fd6ba9507ff5ad70e4ca8c350480c7aff7cc3398e6f46f9ddbe441839\"" Jul 10 00:55:59.701607 env[1361]: time="2025-07-10T00:55:59.699593316Z" level=info msg="CreateContainer within sandbox \"47353d8fd6ba9507ff5ad70e4ca8c350480c7aff7cc3398e6f46f9ddbe441839\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:55:59.702953 env[1361]: time="2025-07-10T00:55:59.702934142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a4c14ba67359960a12899530bffd79f8d2a1bb0b4dc3cdc7f52fc7956424dfb\"" Jul 10 00:55:59.708114 env[1361]: time="2025-07-10T00:55:59.708087840Z" level=info msg="CreateContainer within sandbox \"7a4c14ba67359960a12899530bffd79f8d2a1bb0b4dc3cdc7f52fc7956424dfb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:55:59.729080 env[1361]: time="2025-07-10T00:55:59.729053442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e24265dad6fc52fd29c8d61f44362e7da017531b0b1597dd75e3f8fce2ca4cc\"" Jul 10 00:55:59.730227 env[1361]: time="2025-07-10T00:55:59.730207977Z" level=info msg="CreateContainer within sandbox 
\"7e24265dad6fc52fd29c8d61f44362e7da017531b0b1597dd75e3f8fce2ca4cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:55:59.842017 kubelet[1888]: W0710 00:55:59.841941 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 10 00:55:59.842017 kubelet[1888]: E0710 00:55:59.841986 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:55:59.958869 env[1361]: time="2025-07-10T00:55:59.958832136Z" level=info msg="CreateContainer within sandbox \"7e24265dad6fc52fd29c8d61f44362e7da017531b0b1597dd75e3f8fce2ca4cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e003ce79daa509a0c0f249b3bc3f6d58e34d858553457daca1aef4b65f94d9d\"" Jul 10 00:55:59.959573 env[1361]: time="2025-07-10T00:55:59.959534557Z" level=info msg="StartContainer for \"6e003ce79daa509a0c0f249b3bc3f6d58e34d858553457daca1aef4b65f94d9d\"" Jul 10 00:55:59.960979 env[1361]: time="2025-07-10T00:55:59.960956438Z" level=info msg="CreateContainer within sandbox \"47353d8fd6ba9507ff5ad70e4ca8c350480c7aff7cc3398e6f46f9ddbe441839\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa5c6df7e19394fb94da9d922b4e8ea35d93510994efbcba78a96754d0f016ed\"" Jul 10 00:55:59.961273 env[1361]: time="2025-07-10T00:55:59.961257345Z" level=info msg="StartContainer for \"fa5c6df7e19394fb94da9d922b4e8ea35d93510994efbcba78a96754d0f016ed\"" Jul 10 00:55:59.969667 env[1361]: time="2025-07-10T00:55:59.969633539Z" level=info msg="CreateContainer within 
sandbox \"7a4c14ba67359960a12899530bffd79f8d2a1bb0b4dc3cdc7f52fc7956424dfb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23bb6e953f591c9b044741ec80e496f93f0c86901aed0f49d916526884b7e603\"" Jul 10 00:55:59.970495 env[1361]: time="2025-07-10T00:55:59.970455807Z" level=info msg="StartContainer for \"23bb6e953f591c9b044741ec80e496f93f0c86901aed0f49d916526884b7e603\"" Jul 10 00:55:59.999085 kubelet[1888]: E0710 00:55:59.999046 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="1.6s" Jul 10 00:56:00.044070 env[1361]: time="2025-07-10T00:56:00.043973812Z" level=info msg="StartContainer for \"fa5c6df7e19394fb94da9d922b4e8ea35d93510994efbcba78a96754d0f016ed\" returns successfully" Jul 10 00:56:00.057540 env[1361]: time="2025-07-10T00:56:00.057505636Z" level=info msg="StartContainer for \"6e003ce79daa509a0c0f249b3bc3f6d58e34d858553457daca1aef4b65f94d9d\" returns successfully" Jul 10 00:56:00.101276 env[1361]: time="2025-07-10T00:56:00.101218420Z" level=info msg="StartContainer for \"23bb6e953f591c9b044741ec80e496f93f0c86901aed0f49d916526884b7e603\" returns successfully" Jul 10 00:56:00.135603 kubelet[1888]: I0710 00:56:00.135538 1888 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:56:00.137268 kubelet[1888]: E0710 00:56:00.136109 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 10 00:56:00.180041 kubelet[1888]: W0710 00:56:00.180004 1888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial 
tcp 139.178.70.107:6443: connect: connection refused Jul 10 00:56:00.180173 kubelet[1888]: E0710 00:56:00.180158 1888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:56:00.523432 kubelet[1888]: E0710 00:56:00.523376 1888 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:56:01.599805 kubelet[1888]: E0710 00:56:01.599766 1888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="3.2s" Jul 10 00:56:01.737840 kubelet[1888]: I0710 00:56:01.737821 1888 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:56:01.738077 kubelet[1888]: E0710 00:56:01.738058 1888 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 10 00:56:03.229078 kubelet[1888]: E0710 00:56:03.229053 1888 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 00:56:03.498475 kubelet[1888]: I0710 00:56:03.498389 1888 apiserver.go:52] "Watching apiserver" Jul 10 00:56:03.581215 
kubelet[1888]: I0710 00:56:03.581190 1888 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:56:03.582566 kubelet[1888]: E0710 00:56:03.582543 1888 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 00:56:03.999496 kubelet[1888]: E0710 00:56:03.999476 1888 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 00:56:04.802812 kubelet[1888]: E0710 00:56:04.802787 1888 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:56:04.912545 kubelet[1888]: E0710 00:56:04.912516 1888 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 00:56:04.939624 kubelet[1888]: I0710 00:56:04.939603 1888 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:56:04.943499 kubelet[1888]: I0710 00:56:04.943480 1888 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:56:05.626643 systemd[1]: Reloading. Jul 10 00:56:05.681036 /usr/lib/systemd/system-generators/torcx-generator[2179]: time="2025-07-10T00:56:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:56:05.681053 /usr/lib/systemd/system-generators/torcx-generator[2179]: time="2025-07-10T00:56:05Z" level=info msg="torcx already run" Jul 10 00:56:05.726395 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Jul 10 00:56:05.726406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:56:05.738496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:56:05.799578 systemd[1]: Stopping kubelet.service... Jul 10 00:56:05.812668 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:56:05.812850 systemd[1]: Stopped kubelet.service. Jul 10 00:56:05.814486 systemd[1]: Starting kubelet.service... Jul 10 00:56:06.848405 systemd[1]: Started kubelet.service. Jul 10 00:56:07.063777 kubelet[2254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:56:07.063777 kubelet[2254]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:56:07.063777 kubelet[2254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:56:07.064048 kubelet[2254]: I0710 00:56:07.063815 2254 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:56:07.067672 kubelet[2254]: I0710 00:56:07.067654 2254 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:56:07.067672 kubelet[2254]: I0710 00:56:07.067669 2254 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:56:07.067802 kubelet[2254]: I0710 00:56:07.067791 2254 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:56:07.068538 kubelet[2254]: I0710 00:56:07.068526 2254 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:56:07.080598 kubelet[2254]: I0710 00:56:07.080580 2254 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:56:07.083491 kubelet[2254]: E0710 00:56:07.083474 2254 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:56:07.083587 kubelet[2254]: I0710 00:56:07.083577 2254 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:56:07.085157 kubelet[2254]: I0710 00:56:07.085148 2254 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:56:07.085442 kubelet[2254]: I0710 00:56:07.085434 2254 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:56:07.085590 kubelet[2254]: I0710 00:56:07.085577 2254 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:56:07.085738 kubelet[2254]: I0710 00:56:07.085630 2254 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 10 00:56:07.085870 kubelet[2254]: I0710 00:56:07.085861 2254 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:56:07.085917 kubelet[2254]: I0710 00:56:07.085910 2254 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:56:07.085974 kubelet[2254]: I0710 00:56:07.085967 2254 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:56:07.086074 kubelet[2254]: I0710 00:56:07.086067 2254 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:56:07.086121 kubelet[2254]: I0710 00:56:07.086113 2254 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:56:07.086231 kubelet[2254]: I0710 00:56:07.086223 2254 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:56:07.086281 kubelet[2254]: I0710 00:56:07.086273 2254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:56:07.102460 kubelet[2254]: I0710 00:56:07.100686 2254 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:56:07.102460 kubelet[2254]: I0710 00:56:07.100960 2254 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:56:07.102460 kubelet[2254]: I0710 00:56:07.101191 2254 server.go:1274] "Started kubelet" Jul 10 00:56:07.102460 kubelet[2254]: I0710 00:56:07.102230 2254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:56:07.116628 kubelet[2254]: I0710 00:56:07.115573 2254 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:56:07.116628 kubelet[2254]: I0710 00:56:07.115774 2254 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:56:07.116628 kubelet[2254]: I0710 00:56:07.115841 2254 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:56:07.116885 kubelet[2254]: I0710 00:56:07.116865 2254 server.go:163] "Starting to listen" 
address="0.0.0.0" port=10250 Jul 10 00:56:07.117570 kubelet[2254]: I0710 00:56:07.117558 2254 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:56:07.118644 kubelet[2254]: E0710 00:56:07.118625 2254 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:56:07.123765 sudo[2269]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:56:07.123926 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 10 00:56:07.127879 kubelet[2254]: I0710 00:56:07.127856 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:56:07.128668 kubelet[2254]: I0710 00:56:07.128506 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:56:07.128668 kubelet[2254]: I0710 00:56:07.128520 2254 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:56:07.128668 kubelet[2254]: I0710 00:56:07.128532 2254 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:56:07.128668 kubelet[2254]: E0710 00:56:07.128574 2254 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:56:07.135678 kubelet[2254]: I0710 00:56:07.135225 2254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:56:07.135678 kubelet[2254]: I0710 00:56:07.135341 2254 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:56:07.135678 kubelet[2254]: I0710 00:56:07.135386 2254 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:56:07.136375 kubelet[2254]: I0710 00:56:07.136347 2254 
factory.go:221] Registration of the containerd container factory successfully Jul 10 00:56:07.136375 kubelet[2254]: I0710 00:56:07.136374 2254 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:56:07.136432 kubelet[2254]: I0710 00:56:07.136423 2254 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:56:07.184292 kubelet[2254]: I0710 00:56:07.184273 2254 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:56:07.184430 kubelet[2254]: I0710 00:56:07.184418 2254 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:56:07.184497 kubelet[2254]: I0710 00:56:07.184488 2254 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:56:07.184684 kubelet[2254]: I0710 00:56:07.184674 2254 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:56:07.184756 kubelet[2254]: I0710 00:56:07.184734 2254 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:56:07.184818 kubelet[2254]: I0710 00:56:07.184809 2254 policy_none.go:49] "None policy: Start" Jul 10 00:56:07.185189 kubelet[2254]: I0710 00:56:07.185180 2254 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:56:07.185258 kubelet[2254]: I0710 00:56:07.185249 2254 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:56:07.185442 kubelet[2254]: I0710 00:56:07.185433 2254 state_mem.go:75] "Updated machine memory state" Jul 10 00:56:07.186434 kubelet[2254]: I0710 00:56:07.186422 2254 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:56:07.186619 kubelet[2254]: I0710 00:56:07.186609 2254 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:56:07.186694 kubelet[2254]: I0710 00:56:07.186671 2254 container_log_manager.go:189] "Initializing container log rotate 
workers" workers=1 monitorPeriod="10s" Jul 10 00:56:07.188099 kubelet[2254]: I0710 00:56:07.188090 2254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:56:07.234190 kubelet[2254]: E0710 00:56:07.234141 2254 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:56:07.292332 kubelet[2254]: I0710 00:56:07.292301 2254 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:56:07.298155 kubelet[2254]: I0710 00:56:07.298128 2254 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:56:07.298269 kubelet[2254]: I0710 00:56:07.298190 2254 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:56:07.417226 kubelet[2254]: I0710 00:56:07.417175 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:56:07.417359 kubelet[2254]: I0710 00:56:07.417337 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:56:07.417417 kubelet[2254]: I0710 00:56:07.417407 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 10 00:56:07.417473 kubelet[2254]: I0710 00:56:07.417463 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:56:07.417531 kubelet[2254]: I0710 00:56:07.417522 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/392856983053c3ce3a70e1daab9cbfef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"392856983053c3ce3a70e1daab9cbfef\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:56:07.417583 kubelet[2254]: I0710 00:56:07.417574 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/392856983053c3ce3a70e1daab9cbfef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"392856983053c3ce3a70e1daab9cbfef\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:56:07.417638 kubelet[2254]: I0710 00:56:07.417629 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/392856983053c3ce3a70e1daab9cbfef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"392856983053c3ce3a70e1daab9cbfef\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:56:07.417692 kubelet[2254]: I0710 00:56:07.417683 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") 
" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:56:07.417748 kubelet[2254]: I0710 00:56:07.417738 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:56:07.791722 sudo[2269]: pam_unix(sudo:session): session closed for user root Jul 10 00:56:08.089440 kubelet[2254]: I0710 00:56:08.089382 2254 apiserver.go:52] "Watching apiserver" Jul 10 00:56:08.116311 kubelet[2254]: I0710 00:56:08.116297 2254 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:56:08.202502 kubelet[2254]: I0710 00:56:08.202470 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.202457154 podStartE2EDuration="3.202457154s" podCreationTimestamp="2025-07-10 00:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:56:08.188525455 +0000 UTC m=+1.319981693" watchObservedRunningTime="2025-07-10 00:56:08.202457154 +0000 UTC m=+1.333913394" Jul 10 00:56:08.218146 kubelet[2254]: I0710 00:56:08.218119 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.218105703 podStartE2EDuration="1.218105703s" podCreationTimestamp="2025-07-10 00:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:56:08.202776215 +0000 UTC m=+1.334232460" watchObservedRunningTime="2025-07-10 00:56:08.218105703 +0000 UTC m=+1.349561951" Jul 10 00:56:08.235508 kubelet[2254]: I0710 00:56:08.235467 2254 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.235454252 podStartE2EDuration="1.235454252s" podCreationTimestamp="2025-07-10 00:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:56:08.218360874 +0000 UTC m=+1.349817111" watchObservedRunningTime="2025-07-10 00:56:08.235454252 +0000 UTC m=+1.366910499" Jul 10 00:56:09.232746 kubelet[2254]: I0710 00:56:09.232726 2254 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:56:09.233971 env[1361]: time="2025-07-10T00:56:09.233264321Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:56:09.234222 kubelet[2254]: I0710 00:56:09.234212 2254 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:56:09.977447 kubelet[2254]: E0710 00:56:09.977406 2254 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-fs9tb lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-fs9tb lib-modules xtables-lock]: context canceled" pod="kube-system/cilium-6fnxc" podUID="90ef8bd9-f27c-4187-ac6f-01d6250fdb86" Jul 10 00:56:10.034322 kubelet[2254]: I0710 00:56:10.034284 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71b66231-d84b-4049-b699-bd7541027166-xtables-lock\") pod \"kube-proxy-2fvp2\" (UID: 
\"71b66231-d84b-4049-b699-bd7541027166\") " pod="kube-system/kube-proxy-2fvp2" Jul 10 00:56:10.034322 kubelet[2254]: I0710 00:56:10.034312 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-clustermesh-secrets\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034492 kubelet[2254]: I0710 00:56:10.034325 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71b66231-d84b-4049-b699-bd7541027166-lib-modules\") pod \"kube-proxy-2fvp2\" (UID: \"71b66231-d84b-4049-b699-bd7541027166\") " pod="kube-system/kube-proxy-2fvp2" Jul 10 00:56:10.034492 kubelet[2254]: I0710 00:56:10.034376 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjlx\" (UniqueName: \"kubernetes.io/projected/71b66231-d84b-4049-b699-bd7541027166-kube-api-access-hxjlx\") pod \"kube-proxy-2fvp2\" (UID: \"71b66231-d84b-4049-b699-bd7541027166\") " pod="kube-system/kube-proxy-2fvp2" Jul 10 00:56:10.034492 kubelet[2254]: I0710 00:56:10.034388 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-bpf-maps\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034492 kubelet[2254]: I0710 00:56:10.034397 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hostproc\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034492 kubelet[2254]: 
I0710 00:56:10.034415 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-lib-modules\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034492 kubelet[2254]: I0710 00:56:10.034427 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-config-path\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034686 kubelet[2254]: I0710 00:56:10.034436 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-kernel\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034686 kubelet[2254]: I0710 00:56:10.034447 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-cgroup\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034686 kubelet[2254]: I0710 00:56:10.034456 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-net\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034686 kubelet[2254]: I0710 00:56:10.034470 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-run\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034686 kubelet[2254]: I0710 00:56:10.034495 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cni-path\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034686 kubelet[2254]: I0710 00:56:10.034504 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-xtables-lock\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034900 kubelet[2254]: I0710 00:56:10.034513 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hubble-tls\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034900 kubelet[2254]: I0710 00:56:10.034522 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9tb\" (UniqueName: \"kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-kube-api-access-fs9tb\") pod \"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034900 kubelet[2254]: I0710 00:56:10.034538 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-etc-cni-netd\") pod 
\"cilium-6fnxc\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " pod="kube-system/cilium-6fnxc" Jul 10 00:56:10.034900 kubelet[2254]: I0710 00:56:10.034546 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71b66231-d84b-4049-b699-bd7541027166-kube-proxy\") pod \"kube-proxy-2fvp2\" (UID: \"71b66231-d84b-4049-b699-bd7541027166\") " pod="kube-system/kube-proxy-2fvp2" Jul 10 00:56:10.138359 kubelet[2254]: I0710 00:56:10.138332 2254 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:56:10.235182 kubelet[2254]: I0710 00:56:10.235109 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hostproc\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235483 kubelet[2254]: I0710 00:56:10.235471 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-lib-modules\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235562 kubelet[2254]: I0710 00:56:10.235554 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-run\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235647 kubelet[2254]: I0710 00:56:10.235640 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cni-path\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235731 kubelet[2254]: I0710 00:56:10.235724 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-xtables-lock\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235815 kubelet[2254]: I0710 00:56:10.235808 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-bpf-maps\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235896 kubelet[2254]: I0710 00:56:10.235888 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-kernel\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.235976 kubelet[2254]: I0710 00:56:10.235969 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hubble-tls\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236219 kubelet[2254]: I0710 00:56:10.236211 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs9tb\" (UniqueName: \"kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-kube-api-access-fs9tb\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236286 kubelet[2254]: I0710 00:56:10.236277 2254 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-config-path\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236354 kubelet[2254]: I0710 00:56:10.236341 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-clustermesh-secrets\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236408 kubelet[2254]: I0710 00:56:10.236399 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-etc-cni-netd\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236468 kubelet[2254]: I0710 00:56:10.236460 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-cgroup\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236523 kubelet[2254]: I0710 00:56:10.236510 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-net\") pod \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\" (UID: \"90ef8bd9-f27c-4187-ac6f-01d6250fdb86\") " Jul 10 00:56:10.236604 kubelet[2254]: I0710 00:56:10.236594 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23cd1691-30eb-4002-9c8a-a66aab02e4b0-cilium-config-path\") pod 
\"cilium-operator-5d85765b45-2lrvw\" (UID: \"23cd1691-30eb-4002-9c8a-a66aab02e4b0\") " pod="kube-system/cilium-operator-5d85765b45-2lrvw" Jul 10 00:56:10.236667 kubelet[2254]: I0710 00:56:10.236658 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftk8\" (UniqueName: \"kubernetes.io/projected/23cd1691-30eb-4002-9c8a-a66aab02e4b0-kube-api-access-4ftk8\") pod \"cilium-operator-5d85765b45-2lrvw\" (UID: \"23cd1691-30eb-4002-9c8a-a66aab02e4b0\") " pod="kube-system/cilium-operator-5d85765b45-2lrvw" Jul 10 00:56:10.236734 kubelet[2254]: I0710 00:56:10.235477 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hostproc" (OuterVolumeSpecName: "hostproc") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.236788 kubelet[2254]: I0710 00:56:10.235492 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.236840 kubelet[2254]: I0710 00:56:10.235622 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.236883 kubelet[2254]: I0710 00:56:10.235705 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cni-path" (OuterVolumeSpecName: "cni-path") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.236934 kubelet[2254]: I0710 00:56:10.235788 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.236977 kubelet[2254]: I0710 00:56:10.235869 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.237031 kubelet[2254]: I0710 00:56:10.235950 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.239068 kubelet[2254]: I0710 00:56:10.239056 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:56:10.239599 env[1361]: time="2025-07-10T00:56:10.239575452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2fvp2,Uid:71b66231-d84b-4049-b699-bd7541027166,Namespace:kube-system,Attempt:0,}" Jul 10 00:56:10.240837 kubelet[2254]: I0710 00:56:10.240558 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.240918 kubelet[2254]: I0710 00:56:10.240903 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.248933 systemd[1]: var-lib-kubelet-pods-90ef8bd9\x2df27c\x2d4187\x2dac6f\x2d01d6250fdb86-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 10 00:56:10.250164 kubelet[2254]: I0710 00:56:10.250138 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:56:10.250278 kubelet[2254]: I0710 00:56:10.250269 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:56:10.252221 env[1361]: time="2025-07-10T00:56:10.250338172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:56:10.252221 env[1361]: time="2025-07-10T00:56:10.252200318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:56:10.252221 env[1361]: time="2025-07-10T00:56:10.252208434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:56:10.252840 env[1361]: time="2025-07-10T00:56:10.252293510Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4646ef6f594137ca25608de6d427ec680e4fde0f85dbe63c3ef15f4226ad0c70 pid=2321 runtime=io.containerd.runc.v2 Jul 10 00:56:10.254137 systemd[1]: var-lib-kubelet-pods-90ef8bd9\x2df27c\x2d4187\x2dac6f\x2d01d6250fdb86-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 10 00:56:10.256046 kubelet[2254]: I0710 00:56:10.256016 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-kube-api-access-fs9tb" (OuterVolumeSpecName: "kube-api-access-fs9tb") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "kube-api-access-fs9tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:56:10.256167 kubelet[2254]: I0710 00:56:10.256027 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90ef8bd9-f27c-4187-ac6f-01d6250fdb86" (UID: "90ef8bd9-f27c-4187-ac6f-01d6250fdb86"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:56:10.281574 env[1361]: time="2025-07-10T00:56:10.281544974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2fvp2,Uid:71b66231-d84b-4049-b699-bd7541027166,Namespace:kube-system,Attempt:0,} returns sandbox id \"4646ef6f594137ca25608de6d427ec680e4fde0f85dbe63c3ef15f4226ad0c70\"" Jul 10 00:56:10.284087 env[1361]: time="2025-07-10T00:56:10.284065900Z" level=info msg="CreateContainer within sandbox \"4646ef6f594137ca25608de6d427ec680e4fde0f85dbe63c3ef15f4226ad0c70\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:56:10.337292 kubelet[2254]: I0710 00:56:10.337270 2254 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs9tb\" (UniqueName: \"kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-kube-api-access-fs9tb\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.337292 kubelet[2254]: I0710 00:56:10.337289 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Jul 10 00:56:10.337292 kubelet[2254]: I0710 00:56:10.337294 2254 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.337292 kubelet[2254]: I0710 00:56:10.337299 2254 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337304 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337308 2254 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337313 2254 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337317 2254 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337321 2254 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337327 2254 reconciler_common.go:293] "Volume 
detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337331 2254 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.338211 kubelet[2254]: I0710 00:56:10.337337 2254 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.339932 kubelet[2254]: I0710 00:56:10.337341 2254 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.339932 kubelet[2254]: I0710 00:56:10.337346 2254 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90ef8bd9-f27c-4187-ac6f-01d6250fdb86-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:56:10.489551 env[1361]: time="2025-07-10T00:56:10.488894139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2lrvw,Uid:23cd1691-30eb-4002-9c8a-a66aab02e4b0,Namespace:kube-system,Attempt:0,}" Jul 10 00:56:10.520362 env[1361]: time="2025-07-10T00:56:10.520308541Z" level=info msg="CreateContainer within sandbox \"4646ef6f594137ca25608de6d427ec680e4fde0f85dbe63c3ef15f4226ad0c70\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09a2f8cad734a4a9f1bc9448b9abe48c4676305ca599215536b2f41d1abe1b6d\"" Jul 10 00:56:10.521502 env[1361]: time="2025-07-10T00:56:10.521482143Z" level=info msg="StartContainer for \"09a2f8cad734a4a9f1bc9448b9abe48c4676305ca599215536b2f41d1abe1b6d\"" Jul 10 00:56:10.574544 
env[1361]: time="2025-07-10T00:56:10.574424047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:56:10.574544 env[1361]: time="2025-07-10T00:56:10.574452572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:56:10.574544 env[1361]: time="2025-07-10T00:56:10.574459513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:56:10.574760 env[1361]: time="2025-07-10T00:56:10.574729996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac pid=2391 runtime=io.containerd.runc.v2 Jul 10 00:56:10.583288 env[1361]: time="2025-07-10T00:56:10.581521325Z" level=info msg="StartContainer for \"09a2f8cad734a4a9f1bc9448b9abe48c4676305ca599215536b2f41d1abe1b6d\" returns successfully" Jul 10 00:56:10.609722 env[1361]: time="2025-07-10T00:56:10.609696284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2lrvw,Uid:23cd1691-30eb-4002-9c8a-a66aab02e4b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac\"" Jul 10 00:56:10.611227 env[1361]: time="2025-07-10T00:56:10.611211149Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:56:11.148301 systemd[1]: var-lib-kubelet-pods-90ef8bd9\x2df27c\x2d4187\x2dac6f\x2d01d6250fdb86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfs9tb.mount: Deactivated successfully. 
Jul 10 00:56:11.207825 kubelet[2254]: I0710 00:56:11.207783 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2fvp2" podStartSLOduration=2.207770438 podStartE2EDuration="2.207770438s" podCreationTimestamp="2025-07-10 00:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:56:11.207703536 +0000 UTC m=+4.339159777" watchObservedRunningTime="2025-07-10 00:56:11.207770438 +0000 UTC m=+4.339226677" Jul 10 00:56:11.242500 kubelet[2254]: I0710 00:56:11.242469 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-kernel\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.242912 kubelet[2254]: I0710 00:56:11.242900 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4042cc1e-c41f-4e6b-b861-ce18118b4808-clustermesh-secrets\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243022 kubelet[2254]: I0710 00:56:11.243010 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcqtz\" (UniqueName: \"kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-kube-api-access-lcqtz\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243131 kubelet[2254]: I0710 00:56:11.243120 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-config-path\") pod 
\"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243437 kubelet[2254]: I0710 00:56:11.243425 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-net\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243542 kubelet[2254]: I0710 00:56:11.243531 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-bpf-maps\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243637 kubelet[2254]: I0710 00:56:11.243627 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-cgroup\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243733 kubelet[2254]: I0710 00:56:11.243723 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cni-path\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243826 kubelet[2254]: I0710 00:56:11.243816 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-run\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.243914 kubelet[2254]: I0710 
00:56:11.243904 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-lib-modules\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.244008 kubelet[2254]: I0710 00:56:11.243997 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-hubble-tls\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.244115 kubelet[2254]: I0710 00:56:11.244104 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-hostproc\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.244395 kubelet[2254]: I0710 00:56:11.244385 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-etc-cni-netd\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.244501 kubelet[2254]: I0710 00:56:11.244490 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-xtables-lock\") pod \"cilium-rj4lh\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " pod="kube-system/cilium-rj4lh" Jul 10 00:56:11.502347 env[1361]: time="2025-07-10T00:56:11.501952821Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-rj4lh,Uid:4042cc1e-c41f-4e6b-b861-ce18118b4808,Namespace:kube-system,Attempt:0,}" Jul 10 00:56:11.515633 env[1361]: time="2025-07-10T00:56:11.515586204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:56:11.515633 env[1361]: time="2025-07-10T00:56:11.515617769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:56:11.515778 env[1361]: time="2025-07-10T00:56:11.515753078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:56:11.515911 env[1361]: time="2025-07-10T00:56:11.515892436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c pid=2570 runtime=io.containerd.runc.v2 Jul 10 00:56:11.545700 env[1361]: time="2025-07-10T00:56:11.545675592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj4lh,Uid:4042cc1e-c41f-4e6b-b861-ce18118b4808,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\"" Jul 10 00:56:12.415722 env[1361]: time="2025-07-10T00:56:12.415684935Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:56:12.416874 env[1361]: time="2025-07-10T00:56:12.416846033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:56:12.418059 env[1361]: time="2025-07-10T00:56:12.418033480Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:56:12.418759 env[1361]: time="2025-07-10T00:56:12.418724295Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:56:12.421955 env[1361]: time="2025-07-10T00:56:12.421914713Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:56:12.429107 env[1361]: time="2025-07-10T00:56:12.429065334Z" level=info msg="CreateContainer within sandbox \"cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:56:12.438012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399438748.mount: Deactivated successfully. Jul 10 00:56:12.442335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192484520.mount: Deactivated successfully. 
Jul 10 00:56:12.445033 env[1361]: time="2025-07-10T00:56:12.445004373Z" level=info msg="CreateContainer within sandbox \"cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\"" Jul 10 00:56:12.446252 env[1361]: time="2025-07-10T00:56:12.446226017Z" level=info msg="StartContainer for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\"" Jul 10 00:56:12.485542 env[1361]: time="2025-07-10T00:56:12.485509652Z" level=info msg="StartContainer for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" returns successfully" Jul 10 00:56:13.138643 kubelet[2254]: I0710 00:56:13.138618 2254 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ef8bd9-f27c-4187-ac6f-01d6250fdb86" path="/var/lib/kubelet/pods/90ef8bd9-f27c-4187-ac6f-01d6250fdb86/volumes" Jul 10 00:56:13.187797 kubelet[2254]: I0710 00:56:13.187760 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2lrvw" podStartSLOduration=1.377975761 podStartE2EDuration="3.187746925s" podCreationTimestamp="2025-07-10 00:56:10 +0000 UTC" firstStartedPulling="2025-07-10 00:56:10.610344982 +0000 UTC m=+3.741801217" lastFinishedPulling="2025-07-10 00:56:12.420116139 +0000 UTC m=+5.551572381" observedRunningTime="2025-07-10 00:56:13.187036263 +0000 UTC m=+6.318492510" watchObservedRunningTime="2025-07-10 00:56:13.187746925 +0000 UTC m=+6.319203166" Jul 10 00:56:16.223313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219150448.mount: Deactivated successfully. 
Jul 10 00:56:20.571911 env[1361]: time="2025-07-10T00:56:20.571873221Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:56:20.584503 env[1361]: time="2025-07-10T00:56:20.584471431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:56:20.586731 env[1361]: time="2025-07-10T00:56:20.586712017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:56:20.593567 env[1361]: time="2025-07-10T00:56:20.587522866Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:56:20.623572 env[1361]: time="2025-07-10T00:56:20.623410923Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:56:20.631642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822894428.mount: Deactivated successfully. Jul 10 00:56:20.637144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062121230.mount: Deactivated successfully. 
Jul 10 00:56:20.639826 env[1361]: time="2025-07-10T00:56:20.639797685Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\"" Jul 10 00:56:20.641305 env[1361]: time="2025-07-10T00:56:20.640330434Z" level=info msg="StartContainer for \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\"" Jul 10 00:56:20.691861 env[1361]: time="2025-07-10T00:56:20.691834315Z" level=info msg="StartContainer for \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\" returns successfully" Jul 10 00:56:21.337779 env[1361]: time="2025-07-10T00:56:21.337745701Z" level=info msg="shim disconnected" id=689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a Jul 10 00:56:21.337779 env[1361]: time="2025-07-10T00:56:21.337775936Z" level=warning msg="cleaning up after shim disconnected" id=689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a namespace=k8s.io Jul 10 00:56:21.337779 env[1361]: time="2025-07-10T00:56:21.337784073Z" level=info msg="cleaning up dead shim" Jul 10 00:56:21.342724 env[1361]: time="2025-07-10T00:56:21.342696256Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:56:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2694 runtime=io.containerd.runc.v2\n" Jul 10 00:56:21.628437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a-rootfs.mount: Deactivated successfully. 
Jul 10 00:56:22.290721 env[1361]: time="2025-07-10T00:56:22.290611141Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:56:22.337934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592827006.mount: Deactivated successfully. Jul 10 00:56:22.342388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237740222.mount: Deactivated successfully. Jul 10 00:56:22.367506 env[1361]: time="2025-07-10T00:56:22.367468861Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\"" Jul 10 00:56:22.368746 env[1361]: time="2025-07-10T00:56:22.368055401Z" level=info msg="StartContainer for \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\"" Jul 10 00:56:22.405561 env[1361]: time="2025-07-10T00:56:22.405528101Z" level=info msg="StartContainer for \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\" returns successfully" Jul 10 00:56:22.414634 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:56:22.414835 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:56:22.415016 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:56:22.417143 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:56:22.430172 systemd[1]: Finished systemd-sysctl.service. 
Jul 10 00:56:22.439887 env[1361]: time="2025-07-10T00:56:22.439852393Z" level=info msg="shim disconnected" id=a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834 Jul 10 00:56:22.439887 env[1361]: time="2025-07-10T00:56:22.439883581Z" level=warning msg="cleaning up after shim disconnected" id=a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834 namespace=k8s.io Jul 10 00:56:22.439887 env[1361]: time="2025-07-10T00:56:22.439889929Z" level=info msg="cleaning up dead shim" Jul 10 00:56:22.445513 env[1361]: time="2025-07-10T00:56:22.445479143Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:56:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2760 runtime=io.containerd.runc.v2\n" Jul 10 00:56:23.284638 env[1361]: time="2025-07-10T00:56:23.284612116Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:56:23.347073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846750911.mount: Deactivated successfully. Jul 10 00:56:23.351334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589518744.mount: Deactivated successfully. 
Jul 10 00:56:23.391114 env[1361]: time="2025-07-10T00:56:23.391078711Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\"" Jul 10 00:56:23.392783 env[1361]: time="2025-07-10T00:56:23.392765239Z" level=info msg="StartContainer for \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\"" Jul 10 00:56:23.434113 env[1361]: time="2025-07-10T00:56:23.434031550Z" level=info msg="StartContainer for \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\" returns successfully" Jul 10 00:56:23.616939 env[1361]: time="2025-07-10T00:56:23.616726166Z" level=info msg="shim disconnected" id=aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034 Jul 10 00:56:23.616939 env[1361]: time="2025-07-10T00:56:23.616756516Z" level=warning msg="cleaning up after shim disconnected" id=aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034 namespace=k8s.io Jul 10 00:56:23.616939 env[1361]: time="2025-07-10T00:56:23.616762800Z" level=info msg="cleaning up dead shim" Jul 10 00:56:23.621783 env[1361]: time="2025-07-10T00:56:23.621759257Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:56:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2816 runtime=io.containerd.runc.v2\n" Jul 10 00:56:24.292103 env[1361]: time="2025-07-10T00:56:24.292008011Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:56:24.353297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134763801.mount: Deactivated successfully. Jul 10 00:56:24.357485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343135794.mount: Deactivated successfully. 
Jul 10 00:56:24.391419 env[1361]: time="2025-07-10T00:56:24.391379817Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\"" Jul 10 00:56:24.392267 env[1361]: time="2025-07-10T00:56:24.392250375Z" level=info msg="StartContainer for \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\"" Jul 10 00:56:24.449161 env[1361]: time="2025-07-10T00:56:24.449132613Z" level=info msg="StartContainer for \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\" returns successfully" Jul 10 00:56:24.492919 env[1361]: time="2025-07-10T00:56:24.492884610Z" level=info msg="shim disconnected" id=c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146 Jul 10 00:56:24.492919 env[1361]: time="2025-07-10T00:56:24.492915398Z" level=warning msg="cleaning up after shim disconnected" id=c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146 namespace=k8s.io Jul 10 00:56:24.492919 env[1361]: time="2025-07-10T00:56:24.492922148Z" level=info msg="cleaning up dead shim" Jul 10 00:56:24.497834 env[1361]: time="2025-07-10T00:56:24.497808215Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:56:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2873 runtime=io.containerd.runc.v2\n" Jul 10 00:56:25.290488 env[1361]: time="2025-07-10T00:56:25.290460942Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:56:25.368612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1152173322.mount: Deactivated successfully. Jul 10 00:56:25.373974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943260640.mount: Deactivated successfully. 
Jul 10 00:56:25.428975 env[1361]: time="2025-07-10T00:56:25.428928214Z" level=info msg="CreateContainer within sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\"" Jul 10 00:56:25.429478 env[1361]: time="2025-07-10T00:56:25.429462226Z" level=info msg="StartContainer for \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\"" Jul 10 00:56:25.483254 env[1361]: time="2025-07-10T00:56:25.483226657Z" level=info msg="StartContainer for \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" returns successfully" Jul 10 00:56:25.756666 kubelet[2254]: I0710 00:56:25.756571 2254 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:56:25.949068 kubelet[2254]: I0710 00:56:25.949043 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75ccb5d9-4379-4bdc-9fc7-d91b6580aaaf-config-volume\") pod \"coredns-7c65d6cfc9-2dgdc\" (UID: \"75ccb5d9-4379-4bdc-9fc7-d91b6580aaaf\") " pod="kube-system/coredns-7c65d6cfc9-2dgdc" Jul 10 00:56:25.949240 kubelet[2254]: I0710 00:56:25.949227 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7k6c\" (UniqueName: \"kubernetes.io/projected/75ccb5d9-4379-4bdc-9fc7-d91b6580aaaf-kube-api-access-t7k6c\") pod \"coredns-7c65d6cfc9-2dgdc\" (UID: \"75ccb5d9-4379-4bdc-9fc7-d91b6580aaaf\") " pod="kube-system/coredns-7c65d6cfc9-2dgdc" Jul 10 00:56:25.949329 kubelet[2254]: I0710 00:56:25.949316 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0744f96-bddc-4f76-9ad4-33fe7374e75e-config-volume\") pod \"coredns-7c65d6cfc9-sf6xq\" (UID: 
\"e0744f96-bddc-4f76-9ad4-33fe7374e75e\") " pod="kube-system/coredns-7c65d6cfc9-sf6xq" Jul 10 00:56:25.949430 kubelet[2254]: I0710 00:56:25.949418 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqjcv\" (UniqueName: \"kubernetes.io/projected/e0744f96-bddc-4f76-9ad4-33fe7374e75e-kube-api-access-nqjcv\") pod \"coredns-7c65d6cfc9-sf6xq\" (UID: \"e0744f96-bddc-4f76-9ad4-33fe7374e75e\") " pod="kube-system/coredns-7c65d6cfc9-sf6xq" Jul 10 00:56:26.074465 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 10 00:56:26.109863 env[1361]: time="2025-07-10T00:56:26.109835475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sf6xq,Uid:e0744f96-bddc-4f76-9ad4-33fe7374e75e,Namespace:kube-system,Attempt:0,}" Jul 10 00:56:26.110257 env[1361]: time="2025-07-10T00:56:26.110239435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2dgdc,Uid:75ccb5d9-4379-4bdc-9fc7-d91b6580aaaf,Namespace:kube-system,Attempt:0,}" Jul 10 00:56:26.312451 kubelet[2254]: I0710 00:56:26.312402 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rj4lh" podStartSLOduration=6.269133094 podStartE2EDuration="15.310758927s" podCreationTimestamp="2025-07-10 00:56:11 +0000 UTC" firstStartedPulling="2025-07-10 00:56:11.546502615 +0000 UTC m=+4.677958849" lastFinishedPulling="2025-07-10 00:56:20.588128439 +0000 UTC m=+13.719584682" observedRunningTime="2025-07-10 00:56:26.310540093 +0000 UTC m=+19.441996334" watchObservedRunningTime="2025-07-10 00:56:26.310758927 +0000 UTC m=+19.442215169" Jul 10 00:56:26.478372 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 10 00:56:28.124376 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 10 00:56:28.127159 systemd-networkd[1112]: cilium_host: Link UP Jul 10 00:56:28.127247 systemd-networkd[1112]: cilium_net: Link UP Jul 10 00:56:28.127249 systemd-networkd[1112]: cilium_net: Gained carrier Jul 10 00:56:28.127342 systemd-networkd[1112]: cilium_host: Gained carrier Jul 10 00:56:28.127699 systemd-networkd[1112]: cilium_host: Gained IPv6LL Jul 10 00:56:28.264982 systemd-networkd[1112]: cilium_net: Gained IPv6LL Jul 10 00:56:28.268467 systemd-networkd[1112]: cilium_vxlan: Link UP Jul 10 00:56:28.268469 systemd-networkd[1112]: cilium_vxlan: Gained carrier Jul 10 00:56:28.313812 systemd[1]: run-containerd-runc-k8s.io-ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c-runc.B6lSQL.mount: Deactivated successfully. Jul 10 00:56:28.957371 kernel: NET: Registered PF_ALG protocol family Jul 10 00:56:29.448745 systemd-networkd[1112]: lxc_health: Link UP Jul 10 00:56:29.461738 systemd-networkd[1112]: lxc_health: Gained carrier Jul 10 00:56:29.462363 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:56:29.698105 systemd-networkd[1112]: lxc8214da093f77: Link UP Jul 10 00:56:29.708401 kernel: eth0: renamed from tmp11092 Jul 10 00:56:29.714147 systemd-networkd[1112]: lxc8214da093f77: Gained carrier Jul 10 00:56:29.714595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8214da093f77: link becomes ready Jul 10 00:56:29.722165 systemd-networkd[1112]: lxcf97dc5763ccd: Link UP Jul 10 00:56:29.731599 kernel: eth0: renamed from tmpad845 Jul 10 00:56:29.738841 systemd-networkd[1112]: lxcf97dc5763ccd: Gained carrier Jul 10 00:56:29.739390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf97dc5763ccd: link becomes ready Jul 10 00:56:30.128456 systemd-networkd[1112]: cilium_vxlan: Gained IPv6LL Jul 10 00:56:30.381990 systemd[1]: run-containerd-runc-k8s.io-ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c-runc.PVL0LM.mount: Deactivated 
successfully. Jul 10 00:56:30.832481 systemd-networkd[1112]: lxc_health: Gained IPv6LL Jul 10 00:56:30.896466 systemd-networkd[1112]: lxcf97dc5763ccd: Gained IPv6LL Jul 10 00:56:31.728473 systemd-networkd[1112]: lxc8214da093f77: Gained IPv6LL Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.405423790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.405462237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.405469804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.405558371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad84506234b81ecda20550b1cbd84e10cde7fc6d638e1d2af97cfc38fee81392 pid=3460 runtime=io.containerd.runc.v2 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.410738721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.410779077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.411386514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:56:32.415156 env[1361]: time="2025-07-10T00:56:32.411490136Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/110926b0aedaf02f3c50dc9a2f6d039a0b2ff30acddb684f7f26baf957f5cdc3 pid=3475 runtime=io.containerd.runc.v2 Jul 10 00:56:32.435046 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:56:32.447014 systemd[1]: run-containerd-runc-k8s.io-110926b0aedaf02f3c50dc9a2f6d039a0b2ff30acddb684f7f26baf957f5cdc3-runc.lHaOsr.mount: Deactivated successfully. Jul 10 00:56:32.479234 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:56:32.491211 env[1361]: time="2025-07-10T00:56:32.491186630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2dgdc,Uid:75ccb5d9-4379-4bdc-9fc7-d91b6580aaaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad84506234b81ecda20550b1cbd84e10cde7fc6d638e1d2af97cfc38fee81392\"" Jul 10 00:56:32.497391 env[1361]: time="2025-07-10T00:56:32.496960951Z" level=info msg="CreateContainer within sandbox \"ad84506234b81ecda20550b1cbd84e10cde7fc6d638e1d2af97cfc38fee81392\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:56:32.528475 env[1361]: time="2025-07-10T00:56:32.528446058Z" level=info msg="CreateContainer within sandbox \"ad84506234b81ecda20550b1cbd84e10cde7fc6d638e1d2af97cfc38fee81392\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b0060f293ebf277b65cb84d49313165038ab9472d9d60e7627a5fe6654f66b0\"" Jul 10 00:56:32.529286 env[1361]: time="2025-07-10T00:56:32.529270926Z" level=info msg="StartContainer for \"4b0060f293ebf277b65cb84d49313165038ab9472d9d60e7627a5fe6654f66b0\"" Jul 10 00:56:32.554053 env[1361]: time="2025-07-10T00:56:32.554020785Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sf6xq,Uid:e0744f96-bddc-4f76-9ad4-33fe7374e75e,Namespace:kube-system,Attempt:0,} returns sandbox id \"110926b0aedaf02f3c50dc9a2f6d039a0b2ff30acddb684f7f26baf957f5cdc3\"" Jul 10 00:56:32.559424 env[1361]: time="2025-07-10T00:56:32.559396764Z" level=info msg="CreateContainer within sandbox \"110926b0aedaf02f3c50dc9a2f6d039a0b2ff30acddb684f7f26baf957f5cdc3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:56:32.609938 env[1361]: time="2025-07-10T00:56:32.609905296Z" level=info msg="CreateContainer within sandbox \"110926b0aedaf02f3c50dc9a2f6d039a0b2ff30acddb684f7f26baf957f5cdc3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c2962f8ebff935d7c88167023840168de33aa080f96e50d9f5fc50b47078064\"" Jul 10 00:56:32.610495 env[1361]: time="2025-07-10T00:56:32.610481129Z" level=info msg="StartContainer for \"3c2962f8ebff935d7c88167023840168de33aa080f96e50d9f5fc50b47078064\"" Jul 10 00:56:32.640238 env[1361]: time="2025-07-10T00:56:32.640209661Z" level=info msg="StartContainer for \"4b0060f293ebf277b65cb84d49313165038ab9472d9d60e7627a5fe6654f66b0\" returns successfully" Jul 10 00:56:32.646061 env[1361]: time="2025-07-10T00:56:32.646028840Z" level=info msg="StartContainer for \"3c2962f8ebff935d7c88167023840168de33aa080f96e50d9f5fc50b47078064\" returns successfully" Jul 10 00:56:33.307416 kubelet[2254]: I0710 00:56:33.307384 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sf6xq" podStartSLOduration=23.307371891 podStartE2EDuration="23.307371891s" podCreationTimestamp="2025-07-10 00:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:56:33.307088987 +0000 UTC m=+26.438545227" watchObservedRunningTime="2025-07-10 00:56:33.307371891 +0000 UTC m=+26.438828132" Jul 10 00:56:33.314824 kubelet[2254]: I0710 00:56:33.314796 2254 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2dgdc" podStartSLOduration=23.314775644 podStartE2EDuration="23.314775644s" podCreationTimestamp="2025-07-10 00:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:56:33.314443351 +0000 UTC m=+26.445899593" watchObservedRunningTime="2025-07-10 00:56:33.314775644 +0000 UTC m=+26.446231885" Jul 10 00:56:34.627267 systemd[1]: run-containerd-runc-k8s.io-ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c-runc.DWT1JO.mount: Deactivated successfully. Jul 10 00:56:35.846425 sudo[1590]: pam_unix(sudo:session): session closed for user root Jul 10 00:56:35.856163 sshd[1584]: pam_unix(sshd:session): session closed for user core Jul 10 00:56:35.866466 systemd[1]: sshd@4-139.178.70.107:22-139.178.68.195:44388.service: Deactivated successfully. Jul 10 00:56:35.867282 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:56:35.867547 systemd-logind[1346]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:56:35.868578 systemd-logind[1346]: Removed session 7. Jul 10 00:57:36.741224 systemd[1]: Started sshd@5-139.178.70.107:22-139.178.68.195:57976.service. Jul 10 00:57:36.781710 sshd[3685]: Accepted publickey for core from 139.178.68.195 port 57976 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:36.783016 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:36.787197 systemd-logind[1346]: New session 8 of user core. Jul 10 00:57:36.787587 systemd[1]: Started session-8.scope. Jul 10 00:57:36.998102 sshd[3685]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:36.999931 systemd[1]: sshd@5-139.178.70.107:22-139.178.68.195:57976.service: Deactivated successfully. Jul 10 00:57:37.000696 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 10 00:57:37.000718 systemd-logind[1346]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:57:37.001364 systemd-logind[1346]: Removed session 8. Jul 10 00:57:42.000760 systemd[1]: Started sshd@6-139.178.70.107:22-139.178.68.195:34164.service. Jul 10 00:57:42.103796 sshd[3700]: Accepted publickey for core from 139.178.68.195 port 34164 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:42.104885 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:42.107931 systemd[1]: Started session-9.scope. Jul 10 00:57:42.108408 systemd-logind[1346]: New session 9 of user core. Jul 10 00:57:42.226812 sshd[3700]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:42.228833 systemd[1]: sshd@6-139.178.70.107:22-139.178.68.195:34164.service: Deactivated successfully. Jul 10 00:57:42.229702 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:57:42.230128 systemd-logind[1346]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:57:42.230742 systemd-logind[1346]: Removed session 9. Jul 10 00:57:47.229425 systemd[1]: Started sshd@7-139.178.70.107:22-139.178.68.195:34176.service. Jul 10 00:57:47.265361 sshd[3715]: Accepted publickey for core from 139.178.68.195 port 34176 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:47.266975 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:47.270510 systemd[1]: Started session-10.scope. Jul 10 00:57:47.270781 systemd-logind[1346]: New session 10 of user core. Jul 10 00:57:47.361504 sshd[3715]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:47.363147 systemd[1]: sshd@7-139.178.70.107:22-139.178.68.195:34176.service: Deactivated successfully. Jul 10 00:57:47.363696 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:57:47.364113 systemd-logind[1346]: Session 10 logged out. Waiting for processes to exit. 
Jul 10 00:57:47.364705 systemd-logind[1346]: Removed session 10. Jul 10 00:57:52.364771 systemd[1]: Started sshd@8-139.178.70.107:22-139.178.68.195:41812.service. Jul 10 00:57:52.398422 sshd[3729]: Accepted publickey for core from 139.178.68.195 port 41812 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:52.400008 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:52.404242 systemd-logind[1346]: New session 11 of user core. Jul 10 00:57:52.404303 systemd[1]: Started session-11.scope. Jul 10 00:57:52.510290 sshd[3729]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:52.511415 systemd[1]: Started sshd@9-139.178.70.107:22-139.178.68.195:41826.service. Jul 10 00:57:52.514413 systemd[1]: sshd@8-139.178.70.107:22-139.178.68.195:41812.service: Deactivated successfully. Jul 10 00:57:52.515225 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:57:52.515562 systemd-logind[1346]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:57:52.516050 systemd-logind[1346]: Removed session 11. Jul 10 00:57:52.547832 sshd[3742]: Accepted publickey for core from 139.178.68.195 port 41826 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:52.549158 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:52.553346 systemd[1]: Started session-12.scope. Jul 10 00:57:52.553875 systemd-logind[1346]: New session 12 of user core. Jul 10 00:57:52.687244 systemd[1]: Started sshd@10-139.178.70.107:22-139.178.68.195:41830.service. Jul 10 00:57:52.690753 sshd[3742]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:52.701054 systemd[1]: sshd@9-139.178.70.107:22-139.178.68.195:41826.service: Deactivated successfully. Jul 10 00:57:52.701614 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:57:52.706131 systemd-logind[1346]: Session 12 logged out. Waiting for processes to exit. 
Jul 10 00:57:52.706900 systemd-logind[1346]: Removed session 12. Jul 10 00:57:52.728727 sshd[3752]: Accepted publickey for core from 139.178.68.195 port 41830 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:52.729588 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:52.732289 systemd-logind[1346]: New session 13 of user core. Jul 10 00:57:52.732642 systemd[1]: Started session-13.scope. Jul 10 00:57:52.835344 sshd[3752]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:52.837065 systemd[1]: sshd@10-139.178.70.107:22-139.178.68.195:41830.service: Deactivated successfully. Jul 10 00:57:52.837795 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:57:52.837836 systemd-logind[1346]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:57:52.838527 systemd-logind[1346]: Removed session 13. Jul 10 00:57:57.837503 systemd[1]: Started sshd@11-139.178.70.107:22-139.178.68.195:41834.service. Jul 10 00:57:58.085301 sshd[3766]: Accepted publickey for core from 139.178.68.195 port 41834 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:57:58.096870 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:57:58.104318 systemd-logind[1346]: New session 14 of user core. Jul 10 00:57:58.104684 systemd[1]: Started session-14.scope. Jul 10 00:57:58.249721 sshd[3766]: pam_unix(sshd:session): session closed for user core Jul 10 00:57:58.251263 systemd-logind[1346]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:57:58.251343 systemd[1]: sshd@11-139.178.70.107:22-139.178.68.195:41834.service: Deactivated successfully. Jul 10 00:57:58.251836 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:57:58.252296 systemd-logind[1346]: Removed session 14. Jul 10 00:58:03.247830 systemd[1]: Started sshd@12-139.178.70.107:22-139.178.68.195:58412.service. 
Jul 10 00:58:03.289312 sshd[3778]: Accepted publickey for core from 139.178.68.195 port 58412 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:03.290119 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:03.293088 systemd[1]: Started session-15.scope. Jul 10 00:58:03.293239 systemd-logind[1346]: New session 15 of user core. Jul 10 00:58:03.519086 sshd[3778]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:03.521094 systemd[1]: Started sshd@13-139.178.70.107:22-139.178.68.195:58414.service. Jul 10 00:58:03.526629 systemd[1]: sshd@12-139.178.70.107:22-139.178.68.195:58412.service: Deactivated successfully. Jul 10 00:58:03.527132 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:58:03.528478 systemd-logind[1346]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:58:03.529142 systemd-logind[1346]: Removed session 15. Jul 10 00:58:03.553630 sshd[3788]: Accepted publickey for core from 139.178.68.195 port 58414 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:03.554751 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:03.557034 systemd-logind[1346]: New session 16 of user core. Jul 10 00:58:03.557679 systemd[1]: Started session-16.scope. Jul 10 00:58:04.000960 sshd[3788]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:04.003019 systemd[1]: Started sshd@14-139.178.70.107:22-139.178.68.195:58430.service. Jul 10 00:58:04.004564 systemd-logind[1346]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:58:04.005091 systemd[1]: sshd@13-139.178.70.107:22-139.178.68.195:58414.service: Deactivated successfully. Jul 10 00:58:04.005866 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:58:04.006206 systemd-logind[1346]: Removed session 16. 
Jul 10 00:58:04.040782 sshd[3799]: Accepted publickey for core from 139.178.68.195 port 58430 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:04.042093 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:04.045070 systemd[1]: Started session-17.scope. Jul 10 00:58:04.045831 systemd-logind[1346]: New session 17 of user core. Jul 10 00:58:05.416293 systemd[1]: Started sshd@15-139.178.70.107:22-139.178.68.195:58444.service. Jul 10 00:58:05.418577 systemd[1]: sshd@14-139.178.70.107:22-139.178.68.195:58430.service: Deactivated successfully. Jul 10 00:58:05.417135 sshd[3799]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:05.419170 systemd-logind[1346]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:58:05.419220 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:58:05.420122 systemd-logind[1346]: Removed session 17. Jul 10 00:58:05.469266 sshd[3815]: Accepted publickey for core from 139.178.68.195 port 58444 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:05.471127 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:05.474167 systemd-logind[1346]: New session 18 of user core. Jul 10 00:58:05.474318 systemd[1]: Started session-18.scope. Jul 10 00:58:05.986270 sshd[3815]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:05.988008 systemd[1]: Started sshd@16-139.178.70.107:22-139.178.68.195:58452.service. Jul 10 00:58:05.991575 systemd[1]: sshd@15-139.178.70.107:22-139.178.68.195:58444.service: Deactivated successfully. Jul 10 00:58:05.992191 systemd-logind[1346]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:58:05.992247 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:58:05.993740 systemd-logind[1346]: Removed session 18. 
Jul 10 00:58:06.020697 sshd[3829]: Accepted publickey for core from 139.178.68.195 port 58452 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:06.021652 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:06.024700 systemd[1]: Started session-19.scope. Jul 10 00:58:06.024993 systemd-logind[1346]: New session 19 of user core. Jul 10 00:58:06.138153 sshd[3829]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:06.139648 systemd[1]: sshd@16-139.178.70.107:22-139.178.68.195:58452.service: Deactivated successfully. Jul 10 00:58:06.141993 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:58:06.142922 systemd-logind[1346]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:58:06.144033 systemd-logind[1346]: Removed session 19. Jul 10 00:58:11.139598 systemd[1]: Started sshd@17-139.178.70.107:22-139.178.68.195:58702.service. Jul 10 00:58:11.383188 sshd[3849]: Accepted publickey for core from 139.178.68.195 port 58702 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:11.384396 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:11.387754 systemd[1]: Started session-20.scope. Jul 10 00:58:11.387988 systemd-logind[1346]: New session 20 of user core. Jul 10 00:58:11.490535 sshd[3849]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:11.492334 systemd[1]: sshd@17-139.178.70.107:22-139.178.68.195:58702.service: Deactivated successfully. Jul 10 00:58:11.493178 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:58:11.493431 systemd-logind[1346]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:58:11.493898 systemd-logind[1346]: Removed session 20. Jul 10 00:58:16.494016 systemd[1]: Started sshd@18-139.178.70.107:22-139.178.68.195:58708.service. 
Jul 10 00:58:16.525849 sshd[3864]: Accepted publickey for core from 139.178.68.195 port 58708 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:16.527459 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:16.531329 systemd-logind[1346]: New session 21 of user core. Jul 10 00:58:16.531864 systemd[1]: Started session-21.scope. Jul 10 00:58:16.624469 sshd[3864]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:16.625991 systemd-logind[1346]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:58:16.626128 systemd[1]: sshd@18-139.178.70.107:22-139.178.68.195:58708.service: Deactivated successfully. Jul 10 00:58:16.626593 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:58:16.627120 systemd-logind[1346]: Removed session 21. Jul 10 00:58:21.627442 systemd[1]: Started sshd@19-139.178.70.107:22-139.178.68.195:43004.service. Jul 10 00:58:21.659191 sshd[3877]: Accepted publickey for core from 139.178.68.195 port 43004 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:21.660344 sshd[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:21.664536 systemd[1]: Started session-22.scope. Jul 10 00:58:21.664882 systemd-logind[1346]: New session 22 of user core. Jul 10 00:58:21.784405 sshd[3877]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:21.787056 systemd[1]: Started sshd@20-139.178.70.107:22-139.178.68.195:43008.service. Jul 10 00:58:21.789839 systemd[1]: sshd@19-139.178.70.107:22-139.178.68.195:43004.service: Deactivated successfully. Jul 10 00:58:21.791125 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:58:21.791601 systemd-logind[1346]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:58:21.792180 systemd-logind[1346]: Removed session 22. 
Jul 10 00:58:21.872766 sshd[3888]: Accepted publickey for core from 139.178.68.195 port 43008 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:21.874241 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:21.877785 systemd[1]: Started session-23.scope. Jul 10 00:58:21.878375 systemd-logind[1346]: New session 23 of user core. Jul 10 00:58:23.841333 env[1361]: time="2025-07-10T00:58:23.840916475Z" level=info msg="StopContainer for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" with timeout 30 (s)" Jul 10 00:58:23.841333 env[1361]: time="2025-07-10T00:58:23.841262190Z" level=info msg="Stop container \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" with signal terminated" Jul 10 00:58:24.007508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7-rootfs.mount: Deactivated successfully. Jul 10 00:58:24.011994 env[1361]: time="2025-07-10T00:58:24.007881406Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:58:24.019769 env[1361]: time="2025-07-10T00:58:24.019744085Z" level=info msg="StopContainer for \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" with timeout 2 (s)" Jul 10 00:58:24.020223 env[1361]: time="2025-07-10T00:58:24.020117504Z" level=info msg="Stop container \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" with signal terminated" Jul 10 00:58:24.035904 env[1361]: time="2025-07-10T00:58:24.035866610Z" level=info msg="shim disconnected" id=24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7 Jul 10 00:58:24.036168 env[1361]: time="2025-07-10T00:58:24.036143898Z" level=warning msg="cleaning up after shim 
disconnected" id=24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7 namespace=k8s.io Jul 10 00:58:24.036265 env[1361]: time="2025-07-10T00:58:24.036249242Z" level=info msg="cleaning up dead shim" Jul 10 00:58:24.047785 systemd-networkd[1112]: lxc_health: Link DOWN Jul 10 00:58:24.047794 systemd-networkd[1112]: lxc_health: Lost carrier Jul 10 00:58:24.054589 env[1361]: time="2025-07-10T00:58:24.054562426Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3944 runtime=io.containerd.runc.v2\n" Jul 10 00:58:24.059207 env[1361]: time="2025-07-10T00:58:24.059179418Z" level=info msg="StopContainer for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" returns successfully" Jul 10 00:58:24.059851 env[1361]: time="2025-07-10T00:58:24.059834424Z" level=info msg="StopPodSandbox for \"cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac\"" Jul 10 00:58:24.060009 env[1361]: time="2025-07-10T00:58:24.059993918Z" level=info msg="Container to stop \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:24.062377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac-shm.mount: Deactivated successfully. Jul 10 00:58:24.088035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c-rootfs.mount: Deactivated successfully. Jul 10 00:58:24.093975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac-rootfs.mount: Deactivated successfully. 
Jul 10 00:58:24.115273 env[1361]: time="2025-07-10T00:58:24.115241074Z" level=info msg="shim disconnected" id=cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac Jul 10 00:58:24.115475 env[1361]: time="2025-07-10T00:58:24.115462109Z" level=warning msg="cleaning up after shim disconnected" id=cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac namespace=k8s.io Jul 10 00:58:24.115545 env[1361]: time="2025-07-10T00:58:24.115533003Z" level=info msg="cleaning up dead shim" Jul 10 00:58:24.115784 env[1361]: time="2025-07-10T00:58:24.115766309Z" level=info msg="shim disconnected" id=ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c Jul 10 00:58:24.115858 env[1361]: time="2025-07-10T00:58:24.115846768Z" level=warning msg="cleaning up after shim disconnected" id=ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c namespace=k8s.io Jul 10 00:58:24.116003 env[1361]: time="2025-07-10T00:58:24.115904581Z" level=info msg="cleaning up dead shim" Jul 10 00:58:24.123671 env[1361]: time="2025-07-10T00:58:24.123634721Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n" Jul 10 00:58:24.124535 env[1361]: time="2025-07-10T00:58:24.124517336Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3994 runtime=io.containerd.runc.v2\n" Jul 10 00:58:24.125309 env[1361]: time="2025-07-10T00:58:24.125283551Z" level=info msg="TearDown network for sandbox \"cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac\" successfully" Jul 10 00:58:24.125422 env[1361]: time="2025-07-10T00:58:24.125408867Z" level=info msg="StopPodSandbox for \"cd14f60a5e354c420f07d10c7ae3616905d515cf3134d23e1f5abdca9b7ebfac\" returns successfully" Jul 10 00:58:24.125485 env[1361]: time="2025-07-10T00:58:24.125466500Z" level=info msg="StopContainer for 
\"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" returns successfully" Jul 10 00:58:24.125888 env[1361]: time="2025-07-10T00:58:24.125864525Z" level=info msg="StopPodSandbox for \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\"" Jul 10 00:58:24.125928 env[1361]: time="2025-07-10T00:58:24.125907088Z" level=info msg="Container to stop \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:24.125928 env[1361]: time="2025-07-10T00:58:24.125918822Z" level=info msg="Container to stop \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:24.125985 env[1361]: time="2025-07-10T00:58:24.125930303Z" level=info msg="Container to stop \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:24.125985 env[1361]: time="2025-07-10T00:58:24.125942804Z" level=info msg="Container to stop \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:24.125985 env[1361]: time="2025-07-10T00:58:24.125953628Z" level=info msg="Container to stop \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:24.156689 env[1361]: time="2025-07-10T00:58:24.156646805Z" level=info msg="shim disconnected" id=aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c Jul 10 00:58:24.156689 env[1361]: time="2025-07-10T00:58:24.156687417Z" level=warning msg="cleaning up after shim disconnected" id=aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c namespace=k8s.io Jul 10 00:58:24.156689 env[1361]: time="2025-07-10T00:58:24.156694972Z" level=info msg="cleaning up 
dead shim" Jul 10 00:58:24.163053 env[1361]: time="2025-07-10T00:58:24.163021110Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4040 runtime=io.containerd.runc.v2\n" Jul 10 00:58:24.163669 env[1361]: time="2025-07-10T00:58:24.163647041Z" level=info msg="TearDown network for sandbox \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" successfully" Jul 10 00:58:24.163772 env[1361]: time="2025-07-10T00:58:24.163754928Z" level=info msg="StopPodSandbox for \"aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c\" returns successfully" Jul 10 00:58:24.327106 kubelet[2254]: I0710 00:58:24.327065 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-bpf-maps\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328600 kubelet[2254]: I0710 00:58:24.327131 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-hubble-tls\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328600 kubelet[2254]: I0710 00:58:24.327149 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4042cc1e-c41f-4e6b-b861-ce18118b4808-clustermesh-secrets\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328600 kubelet[2254]: I0710 00:58:24.327167 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-config-path\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" 
(UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328600 kubelet[2254]: I0710 00:58:24.327178 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-cgroup\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328600 kubelet[2254]: I0710 00:58:24.327188 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-net\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328600 kubelet[2254]: I0710 00:58:24.327207 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cni-path\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328811 kubelet[2254]: I0710 00:58:24.327221 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-run\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328811 kubelet[2254]: I0710 00:58:24.327231 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcqtz\" (UniqueName: \"kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-kube-api-access-lcqtz\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328811 kubelet[2254]: I0710 00:58:24.327238 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-hostproc\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328811 kubelet[2254]: I0710 00:58:24.327248 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ftk8\" (UniqueName: \"kubernetes.io/projected/23cd1691-30eb-4002-9c8a-a66aab02e4b0-kube-api-access-4ftk8\") pod \"23cd1691-30eb-4002-9c8a-a66aab02e4b0\" (UID: \"23cd1691-30eb-4002-9c8a-a66aab02e4b0\") " Jul 10 00:58:24.328811 kubelet[2254]: I0710 00:58:24.327259 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-kernel\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.328811 kubelet[2254]: I0710 00:58:24.327268 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-etc-cni-netd\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.329033 kubelet[2254]: I0710 00:58:24.327286 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23cd1691-30eb-4002-9c8a-a66aab02e4b0-cilium-config-path\") pod \"23cd1691-30eb-4002-9c8a-a66aab02e4b0\" (UID: \"23cd1691-30eb-4002-9c8a-a66aab02e4b0\") " Jul 10 00:58:24.329033 kubelet[2254]: I0710 00:58:24.327302 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-lib-modules\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.329033 kubelet[2254]: I0710 
00:58:24.327317 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-xtables-lock\") pod \"4042cc1e-c41f-4e6b-b861-ce18118b4808\" (UID: \"4042cc1e-c41f-4e6b-b861-ce18118b4808\") " Jul 10 00:58:24.334628 kubelet[2254]: I0710 00:58:24.332298 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.335703 kubelet[2254]: I0710 00:58:24.335663 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.345708 kubelet[2254]: I0710 00:58:24.344898 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-kube-api-access-lcqtz" (OuterVolumeSpecName: "kube-api-access-lcqtz") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "kube-api-access-lcqtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:58:24.345708 kubelet[2254]: I0710 00:58:24.345262 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:58:24.345708 kubelet[2254]: I0710 00:58:24.345283 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-hostproc" (OuterVolumeSpecName: "hostproc") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.348425 kubelet[2254]: I0710 00:58:24.348376 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23cd1691-30eb-4002-9c8a-a66aab02e4b0-kube-api-access-4ftk8" (OuterVolumeSpecName: "kube-api-access-4ftk8") pod "23cd1691-30eb-4002-9c8a-a66aab02e4b0" (UID: "23cd1691-30eb-4002-9c8a-a66aab02e4b0"). InnerVolumeSpecName "kube-api-access-4ftk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:58:24.348529 kubelet[2254]: I0710 00:58:24.348453 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.348529 kubelet[2254]: I0710 00:58:24.348479 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.348609 kubelet[2254]: I0710 00:58:24.348589 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4042cc1e-c41f-4e6b-b861-ce18118b4808-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:58:24.351821 kubelet[2254]: I0710 00:58:24.351783 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23cd1691-30eb-4002-9c8a-a66aab02e4b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "23cd1691-30eb-4002-9c8a-a66aab02e4b0" (UID: "23cd1691-30eb-4002-9c8a-a66aab02e4b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:58:24.351930 kubelet[2254]: I0710 00:58:24.351836 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.351930 kubelet[2254]: I0710 00:58:24.351852 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.351930 kubelet[2254]: I0710 00:58:24.351861 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.351930 kubelet[2254]: I0710 00:58:24.351871 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cni-path" (OuterVolumeSpecName: "cni-path") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.351930 kubelet[2254]: I0710 00:58:24.351880 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:24.352518 kubelet[2254]: I0710 00:58:24.352501 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4042cc1e-c41f-4e6b-b861-ce18118b4808" (UID: "4042cc1e-c41f-4e6b-b861-ce18118b4808"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:58:24.428417 kubelet[2254]: I0710 00:58:24.428378 2254 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428417 kubelet[2254]: I0710 00:58:24.428407 2254 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428417 kubelet[2254]: I0710 00:58:24.428415 2254 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428417 kubelet[2254]: I0710 00:58:24.428420 2254 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428425 2254 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4042cc1e-c41f-4e6b-b861-ce18118b4808-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428446 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428451 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428456 
2254 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428461 2254 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428468 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428477 2254 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcqtz\" (UniqueName: \"kubernetes.io/projected/4042cc1e-c41f-4e6b-b861-ce18118b4808-kube-api-access-lcqtz\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.428613 kubelet[2254]: I0710 00:58:24.428486 2254 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.429125 kubelet[2254]: I0710 00:58:24.428496 2254 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ftk8\" (UniqueName: \"kubernetes.io/projected/23cd1691-30eb-4002-9c8a-a66aab02e4b0-kube-api-access-4ftk8\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.429125 kubelet[2254]: I0710 00:58:24.428505 2254 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.429125 kubelet[2254]: I0710 00:58:24.428512 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/23cd1691-30eb-4002-9c8a-a66aab02e4b0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.429125 kubelet[2254]: I0710 00:58:24.428517 2254 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4042cc1e-c41f-4e6b-b861-ce18118b4808-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:24.587094 kubelet[2254]: I0710 00:58:24.587060 2254 scope.go:117] "RemoveContainer" containerID="24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7" Jul 10 00:58:24.600414 env[1361]: time="2025-07-10T00:58:24.599266207Z" level=info msg="RemoveContainer for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\"" Jul 10 00:58:24.605656 env[1361]: time="2025-07-10T00:58:24.605619675Z" level=info msg="RemoveContainer for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" returns successfully" Jul 10 00:58:24.618605 kubelet[2254]: I0710 00:58:24.618586 2254 scope.go:117] "RemoveContainer" containerID="24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7" Jul 10 00:58:24.620807 env[1361]: time="2025-07-10T00:58:24.620707825Z" level=error msg="ContainerStatus for \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\": not found" Jul 10 00:58:24.625859 kubelet[2254]: E0710 00:58:24.625803 2254 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\": not found" containerID="24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7" Jul 10 00:58:24.631993 kubelet[2254]: I0710 00:58:24.630654 2254 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7"} err="failed to get container status \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"24049de46ce8d7172ff197e73f7d333db2aa4e7cc4628074e3e243a81d1766f7\": not found" Jul 10 00:58:24.632118 kubelet[2254]: I0710 00:58:24.631997 2254 scope.go:117] "RemoveContainer" containerID="ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c" Jul 10 00:58:24.633179 env[1361]: time="2025-07-10T00:58:24.633138754Z" level=info msg="RemoveContainer for \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\"" Jul 10 00:58:24.634866 env[1361]: time="2025-07-10T00:58:24.634832456Z" level=info msg="RemoveContainer for \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" returns successfully" Jul 10 00:58:24.635069 kubelet[2254]: I0710 00:58:24.635049 2254 scope.go:117] "RemoveContainer" containerID="c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146" Jul 10 00:58:24.636376 env[1361]: time="2025-07-10T00:58:24.635897867Z" level=info msg="RemoveContainer for \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\"" Jul 10 00:58:24.638383 env[1361]: time="2025-07-10T00:58:24.637639531Z" level=info msg="RemoveContainer for \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\" returns successfully" Jul 10 00:58:24.640231 kubelet[2254]: I0710 00:58:24.640206 2254 scope.go:117] "RemoveContainer" containerID="aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034" Jul 10 00:58:24.641726 env[1361]: time="2025-07-10T00:58:24.641683786Z" level=info msg="RemoveContainer for \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\"" Jul 10 00:58:24.643824 env[1361]: time="2025-07-10T00:58:24.643778348Z" level=info msg="RemoveContainer for 
\"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\" returns successfully" Jul 10 00:58:24.644121 kubelet[2254]: I0710 00:58:24.644105 2254 scope.go:117] "RemoveContainer" containerID="a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834" Jul 10 00:58:24.645022 env[1361]: time="2025-07-10T00:58:24.644995778Z" level=info msg="RemoveContainer for \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\"" Jul 10 00:58:24.646838 env[1361]: time="2025-07-10T00:58:24.646809722Z" level=info msg="RemoveContainer for \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\" returns successfully" Jul 10 00:58:24.647120 kubelet[2254]: I0710 00:58:24.647104 2254 scope.go:117] "RemoveContainer" containerID="689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a" Jul 10 00:58:24.650835 env[1361]: time="2025-07-10T00:58:24.650787194Z" level=info msg="RemoveContainer for \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\"" Jul 10 00:58:24.652835 env[1361]: time="2025-07-10T00:58:24.652793609Z" level=info msg="RemoveContainer for \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\" returns successfully" Jul 10 00:58:24.653061 kubelet[2254]: I0710 00:58:24.653045 2254 scope.go:117] "RemoveContainer" containerID="ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c" Jul 10 00:58:24.653386 env[1361]: time="2025-07-10T00:58:24.653325946Z" level=error msg="ContainerStatus for \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\": not found" Jul 10 00:58:24.653555 kubelet[2254]: E0710 00:58:24.653534 2254 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\": not found" containerID="ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c" Jul 10 00:58:24.653605 kubelet[2254]: I0710 00:58:24.653574 2254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c"} err="failed to get container status \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad9191f26f84f9e4297a2aae0bbfefa60f97a8e5dcc744aeab2b13df84ec485c\": not found" Jul 10 00:58:24.653605 kubelet[2254]: I0710 00:58:24.653598 2254 scope.go:117] "RemoveContainer" containerID="c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146" Jul 10 00:58:24.653758 env[1361]: time="2025-07-10T00:58:24.653728256Z" level=error msg="ContainerStatus for \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\": not found" Jul 10 00:58:24.653899 kubelet[2254]: E0710 00:58:24.653868 2254 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\": not found" containerID="c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146" Jul 10 00:58:24.653942 kubelet[2254]: I0710 00:58:24.653903 2254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146"} err="failed to get container status \"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c9d71382117a7305bf296ecabac0fb2b044a857e5e8d3d7b8fa4a7de234e8146\": not found" Jul 10 00:58:24.653942 kubelet[2254]: I0710 00:58:24.653919 2254 scope.go:117] "RemoveContainer" containerID="aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034" Jul 10 00:58:24.654100 env[1361]: time="2025-07-10T00:58:24.654073573Z" level=error msg="ContainerStatus for \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\": not found" Jul 10 00:58:24.654224 kubelet[2254]: E0710 00:58:24.654209 2254 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\": not found" containerID="aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034" Jul 10 00:58:24.654319 kubelet[2254]: I0710 00:58:24.654303 2254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034"} err="failed to get container status \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa21674b8a415c3e0d7e77cfd8e6ba66ce23485ab590ddc030302dba8e044034\": not found" Jul 10 00:58:24.654416 kubelet[2254]: I0710 00:58:24.654407 2254 scope.go:117] "RemoveContainer" containerID="a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834" Jul 10 00:58:24.654613 env[1361]: time="2025-07-10T00:58:24.654573805Z" level=error msg="ContainerStatus for \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\": not 
found" Jul 10 00:58:24.654749 kubelet[2254]: E0710 00:58:24.654738 2254 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\": not found" containerID="a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834" Jul 10 00:58:24.654837 kubelet[2254]: I0710 00:58:24.654823 2254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834"} err="failed to get container status \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\": rpc error: code = NotFound desc = an error occurred when try to find container \"a39ee0701f1adf80eb5780715a053ef86748cdef302e628fa429d17a8e136834\": not found" Jul 10 00:58:24.654906 kubelet[2254]: I0710 00:58:24.654895 2254 scope.go:117] "RemoveContainer" containerID="689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a" Jul 10 00:58:24.655109 env[1361]: time="2025-07-10T00:58:24.655072006Z" level=error msg="ContainerStatus for \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\": not found" Jul 10 00:58:24.655279 kubelet[2254]: E0710 00:58:24.655257 2254 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\": not found" containerID="689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a" Jul 10 00:58:24.655360 kubelet[2254]: I0710 00:58:24.655289 2254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a"} 
err="failed to get container status \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\": rpc error: code = NotFound desc = an error occurred when try to find container \"689f876575790d5e038c60306dced84a0e8fc4804dce66bd6675d14c6495227a\": not found" Jul 10 00:58:24.851562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c-rootfs.mount: Deactivated successfully. Jul 10 00:58:24.851707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa2ea584e670c65d592afb6c3431d1dd08e8e0a3d5fb778c5ce691cbf3458f8c-shm.mount: Deactivated successfully. Jul 10 00:58:24.851826 systemd[1]: var-lib-kubelet-pods-4042cc1e\x2dc41f\x2d4e6b\x2db861\x2dce18118b4808-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlcqtz.mount: Deactivated successfully. Jul 10 00:58:24.851920 systemd[1]: var-lib-kubelet-pods-4042cc1e\x2dc41f\x2d4e6b\x2db861\x2dce18118b4808-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:58:24.852033 systemd[1]: var-lib-kubelet-pods-4042cc1e\x2dc41f\x2d4e6b\x2db861\x2dce18118b4808-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:58:24.852124 systemd[1]: var-lib-kubelet-pods-23cd1691\x2d30eb\x2d4002\x2d9c8a\x2da66aab02e4b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ftk8.mount: Deactivated successfully. 
Jul 10 00:58:25.130160 kubelet[2254]: I0710 00:58:25.130104 2254 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23cd1691-30eb-4002-9c8a-a66aab02e4b0" path="/var/lib/kubelet/pods/23cd1691-30eb-4002-9c8a-a66aab02e4b0/volumes" Jul 10 00:58:25.141674 kubelet[2254]: I0710 00:58:25.141651 2254 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" path="/var/lib/kubelet/pods/4042cc1e-c41f-4e6b-b861-ce18118b4808/volumes" Jul 10 00:58:25.731440 systemd[1]: Started sshd@21-139.178.70.107:22-139.178.68.195:43012.service. Jul 10 00:58:25.733382 sshd[3888]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:25.737237 systemd[1]: sshd@20-139.178.70.107:22-139.178.68.195:43008.service: Deactivated successfully. Jul 10 00:58:25.737832 systemd-logind[1346]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:58:25.737835 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:58:25.738373 systemd-logind[1346]: Removed session 23. Jul 10 00:58:25.765109 sshd[4057]: Accepted publickey for core from 139.178.68.195 port 43012 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:25.769634 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:25.772068 systemd-logind[1346]: New session 24 of user core. Jul 10 00:58:25.772408 systemd[1]: Started session-24.scope. Jul 10 00:58:26.320046 systemd[1]: Started sshd@22-139.178.70.107:22-139.178.68.195:43016.service. 
Jul 10 00:58:26.322837 sshd[4057]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:26.331080 kubelet[2254]: E0710 00:58:26.330906 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" containerName="clean-cilium-state" Jul 10 00:58:26.331080 kubelet[2254]: E0710 00:58:26.330924 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" containerName="cilium-agent" Jul 10 00:58:26.331080 kubelet[2254]: E0710 00:58:26.330931 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23cd1691-30eb-4002-9c8a-a66aab02e4b0" containerName="cilium-operator" Jul 10 00:58:26.331080 kubelet[2254]: E0710 00:58:26.330950 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" containerName="mount-cgroup" Jul 10 00:58:26.331080 kubelet[2254]: E0710 00:58:26.330956 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" containerName="apply-sysctl-overwrites" Jul 10 00:58:26.331080 kubelet[2254]: E0710 00:58:26.330959 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" containerName="mount-bpf-fs" Jul 10 00:58:26.332238 kubelet[2254]: I0710 00:58:26.332097 2254 memory_manager.go:354] "RemoveStaleState removing state" podUID="23cd1691-30eb-4002-9c8a-a66aab02e4b0" containerName="cilium-operator" Jul 10 00:58:26.332238 kubelet[2254]: I0710 00:58:26.332107 2254 memory_manager.go:354] "RemoveStaleState removing state" podUID="4042cc1e-c41f-4e6b-b861-ce18118b4808" containerName="cilium-agent" Jul 10 00:58:26.334784 systemd[1]: sshd@21-139.178.70.107:22-139.178.68.195:43012.service: Deactivated successfully. Jul 10 00:58:26.335535 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:58:26.335569 systemd-logind[1346]: Session 24 logged out. Waiting for processes to exit. 
Jul 10 00:58:26.336989 systemd-logind[1346]: Removed session 24. Jul 10 00:58:26.373574 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 43016 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:26.374740 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:26.378306 systemd-logind[1346]: New session 25 of user core. Jul 10 00:58:26.378640 systemd[1]: Started session-25.scope. Jul 10 00:58:26.439017 kubelet[2254]: I0710 00:58:26.438995 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-config-path\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439154 kubelet[2254]: I0710 00:58:26.439139 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hubble-tls\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439224 kubelet[2254]: I0710 00:58:26.439209 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-kernel\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439283 kubelet[2254]: I0710 00:58:26.439274 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-clustermesh-secrets\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 
00:58:26.439337 kubelet[2254]: I0710 00:58:26.439328 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-net\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439409 kubelet[2254]: I0710 00:58:26.439399 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxzr\" (UniqueName: \"kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-kube-api-access-xsxzr\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439462 kubelet[2254]: I0710 00:58:26.439454 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-lib-modules\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439543 kubelet[2254]: I0710 00:58:26.439535 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-run\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439598 kubelet[2254]: I0710 00:58:26.439589 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-bpf-maps\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439651 kubelet[2254]: I0710 00:58:26.439642 2254 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-cgroup\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439703 kubelet[2254]: I0710 00:58:26.439695 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-xtables-lock\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439757 kubelet[2254]: I0710 00:58:26.439748 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-ipsec-secrets\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439814 kubelet[2254]: I0710 00:58:26.439805 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-etc-cni-netd\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439865 kubelet[2254]: I0710 00:58:26.439856 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hostproc\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.439918 kubelet[2254]: I0710 00:58:26.439908 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cni-path\") pod \"cilium-4tdc6\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " pod="kube-system/cilium-4tdc6" Jul 10 00:58:26.598858 systemd[1]: Started sshd@23-139.178.70.107:22-139.178.68.195:43022.service. Jul 10 00:58:26.599932 sshd[4068]: pam_unix(sshd:session): session closed for user core Jul 10 00:58:26.615935 systemd-logind[1346]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:58:26.616830 systemd[1]: sshd@22-139.178.70.107:22-139.178.68.195:43016.service: Deactivated successfully. Jul 10 00:58:26.617424 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:58:26.618474 env[1361]: time="2025-07-10T00:58:26.617989043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tdc6,Uid:2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba,Namespace:kube-system,Attempt:0,}" Jul 10 00:58:26.619071 systemd-logind[1346]: Removed session 25. Jul 10 00:58:26.640103 sshd[4084]: Accepted publickey for core from 139.178.68.195 port 43022 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 00:58:26.641132 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:58:26.644771 systemd[1]: Started session-26.scope. Jul 10 00:58:26.645900 systemd-logind[1346]: New session 26 of user core. Jul 10 00:58:26.659291 env[1361]: time="2025-07-10T00:58:26.659146578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:58:26.659291 env[1361]: time="2025-07-10T00:58:26.659171351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:58:26.659291 env[1361]: time="2025-07-10T00:58:26.659178404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:58:26.659503 env[1361]: time="2025-07-10T00:58:26.659475683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d pid=4096 runtime=io.containerd.runc.v2 Jul 10 00:58:26.689011 env[1361]: time="2025-07-10T00:58:26.688985594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tdc6,Uid:2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d\"" Jul 10 00:58:26.692180 env[1361]: time="2025-07-10T00:58:26.692147776Z" level=info msg="CreateContainer within sandbox \"9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:58:26.728219 env[1361]: time="2025-07-10T00:58:26.728180584Z" level=info msg="CreateContainer within sandbox \"9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded\"" Jul 10 00:58:26.729604 env[1361]: time="2025-07-10T00:58:26.728713301Z" level=info msg="StartContainer for \"3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded\"" Jul 10 00:58:26.776209 env[1361]: time="2025-07-10T00:58:26.776174594Z" level=info msg="StartContainer for \"3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded\" returns successfully" Jul 10 00:58:26.817489 env[1361]: time="2025-07-10T00:58:26.817414046Z" level=info msg="shim disconnected" id=3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded Jul 10 00:58:26.817489 env[1361]: time="2025-07-10T00:58:26.817485946Z" level=warning msg="cleaning up after shim disconnected" id=3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded 
namespace=k8s.io Jul 10 00:58:26.817489 env[1361]: time="2025-07-10T00:58:26.817493091Z" level=info msg="cleaning up dead shim" Jul 10 00:58:26.823469 env[1361]: time="2025-07-10T00:58:26.823434179Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Jul 10 00:58:27.215341 kubelet[2254]: E0710 00:58:27.215319 2254 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:58:27.610080 env[1361]: time="2025-07-10T00:58:27.607814606Z" level=info msg="StopPodSandbox for \"9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d\"" Jul 10 00:58:27.610080 env[1361]: time="2025-07-10T00:58:27.607849878Z" level=info msg="Container to stop \"3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:58:27.609278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d-shm.mount: Deactivated successfully. Jul 10 00:58:27.631599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d-rootfs.mount: Deactivated successfully. 
Jul 10 00:58:27.645587 env[1361]: time="2025-07-10T00:58:27.645543795Z" level=info msg="shim disconnected" id=9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d Jul 10 00:58:27.645587 env[1361]: time="2025-07-10T00:58:27.645579719Z" level=warning msg="cleaning up after shim disconnected" id=9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d namespace=k8s.io Jul 10 00:58:27.645587 env[1361]: time="2025-07-10T00:58:27.645588131Z" level=info msg="cleaning up dead shim" Jul 10 00:58:27.652422 env[1361]: time="2025-07-10T00:58:27.652384863Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4226 runtime=io.containerd.runc.v2\n" Jul 10 00:58:27.652647 env[1361]: time="2025-07-10T00:58:27.652622863Z" level=info msg="TearDown network for sandbox \"9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d\" successfully" Jul 10 00:58:27.652715 env[1361]: time="2025-07-10T00:58:27.652644823Z" level=info msg="StopPodSandbox for \"9e7a7a0d95ac7a88b6d6ca6713c56436fa005ec01a22f6c839f66ffbc470c75d\" returns successfully" Jul 10 00:58:27.850580 kubelet[2254]: I0710 00:58:27.850088 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-run\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850580 kubelet[2254]: I0710 00:58:27.850153 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.850580 kubelet[2254]: I0710 00:58:27.850174 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-net\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850580 kubelet[2254]: I0710 00:58:27.850197 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-lib-modules\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850580 kubelet[2254]: I0710 00:58:27.850212 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-ipsec-secrets\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850580 kubelet[2254]: I0710 00:58:27.850225 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hostproc\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850942 kubelet[2254]: I0710 00:58:27.850233 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-bpf-maps\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850942 kubelet[2254]: I0710 00:58:27.850240 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-cgroup\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850942 kubelet[2254]: I0710 00:58:27.850250 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-xtables-lock\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850942 kubelet[2254]: I0710 00:58:27.850258 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-kernel\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850942 kubelet[2254]: I0710 00:58:27.850278 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxzr\" (UniqueName: \"kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-kube-api-access-xsxzr\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.850942 kubelet[2254]: I0710 00:58:27.850289 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-etc-cni-netd\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.851066 kubelet[2254]: I0710 00:58:27.850299 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cni-path\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.851066 kubelet[2254]: I0710 00:58:27.850308 
2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hubble-tls\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.851066 kubelet[2254]: I0710 00:58:27.850317 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-clustermesh-secrets\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.851066 kubelet[2254]: I0710 00:58:27.850328 2254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-config-path\") pod \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\" (UID: \"2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba\") " Jul 10 00:58:27.851066 kubelet[2254]: I0710 00:58:27.850363 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.851491 kubelet[2254]: I0710 00:58:27.851275 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.851491 kubelet[2254]: I0710 00:58:27.851293 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.851491 kubelet[2254]: I0710 00:58:27.851304 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.851491 kubelet[2254]: I0710 00:58:27.851313 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hostproc" (OuterVolumeSpecName: "hostproc") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.851491 kubelet[2254]: I0710 00:58:27.851321 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.858702 kubelet[2254]: I0710 00:58:27.851344 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.858702 kubelet[2254]: I0710 00:58:27.851361 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.858702 kubelet[2254]: I0710 00:58:27.851371 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cni-path" (OuterVolumeSpecName: "cni-path") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.858702 kubelet[2254]: I0710 00:58:27.853551 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:58:27.858702 kubelet[2254]: I0710 00:58:27.854405 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:58:27.853187 systemd[1]: var-lib-kubelet-pods-2b7f47fa\x2de70d\x2d4a0b\x2da0a1\x2d66e65b7cceba-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 10 00:58:27.859016 kubelet[2254]: I0710 00:58:27.854667 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:58:27.859016 kubelet[2254]: I0710 00:58:27.857177 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:58:27.859016 kubelet[2254]: I0710 00:58:27.858269 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-kube-api-access-xsxzr" (OuterVolumeSpecName: "kube-api-access-xsxzr") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). 
InnerVolumeSpecName "kube-api-access-xsxzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:58:27.856826 systemd[1]: var-lib-kubelet-pods-2b7f47fa\x2de70d\x2d4a0b\x2da0a1\x2d66e65b7cceba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:58:27.859284 kubelet[2254]: I0710 00:58:27.859265 2254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" (UID: "2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:58:27.951392 kubelet[2254]: I0710 00:58:27.951286 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.951392 kubelet[2254]: I0710 00:58:27.951316 2254 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.951392 kubelet[2254]: I0710 00:58:27.951329 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.951392 kubelet[2254]: I0710 00:58:27.951341 2254 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.951392 kubelet[2254]: I0710 00:58:27.951363 2254 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.951392 kubelet[2254]: I0710 00:58:27.951374 2254 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxzr\" (UniqueName: \"kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-kube-api-access-xsxzr\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.952595 kubelet[2254]: I0710 00:58:27.952538 2254 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.952680 kubelet[2254]: I0710 00:58:27.952670 2254 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.952763 kubelet[2254]: I0710 00:58:27.952753 2254 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.952827 kubelet[2254]: I0710 00:58:27.952817 2254 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.952899 kubelet[2254]: I0710 00:58:27.952890 2254 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:58:27.952965 kubelet[2254]: I0710 00:58:27.952955 2254 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-cilium-config-path\") on node 
\"localhost\" DevicePath \"\""
Jul 10 00:58:27.953031 kubelet[2254]: I0710 00:58:27.953023 2254 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 10 00:58:27.953088 kubelet[2254]: I0710 00:58:27.953073 2254 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 10 00:58:28.545786 systemd[1]: var-lib-kubelet-pods-2b7f47fa\x2de70d\x2d4a0b\x2da0a1\x2d66e65b7cceba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxsxzr.mount: Deactivated successfully.
Jul 10 00:58:28.545879 systemd[1]: var-lib-kubelet-pods-2b7f47fa\x2de70d\x2d4a0b\x2da0a1\x2d66e65b7cceba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 10 00:58:28.609385 kubelet[2254]: I0710 00:58:28.609341 2254 scope.go:117] "RemoveContainer" containerID="3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded"
Jul 10 00:58:28.611498 env[1361]: time="2025-07-10T00:58:28.611208329Z" level=info msg="RemoveContainer for \"3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded\""
Jul 10 00:58:28.621718 env[1361]: time="2025-07-10T00:58:28.621509470Z" level=info msg="RemoveContainer for \"3df449ba302a649d7ec36de01757387aa495f4347e42dcae13053204a7e7cded\" returns successfully"
Jul 10 00:58:28.651784 kubelet[2254]: E0710 00:58:28.651762 2254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" containerName="mount-cgroup"
Jul 10 00:58:28.651955 kubelet[2254]: I0710 00:58:28.651943 2254 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" containerName="mount-cgroup"
Jul 10 00:58:28.658789 kubelet[2254]: I0710 00:58:28.656648 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-cni-path\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.658789 kubelet[2254]: I0710 00:58:28.656675 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/712840c7-03f3-4ec6-80af-1ac7be98c64f-cilium-ipsec-secrets\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.658789 kubelet[2254]: I0710 00:58:28.656736 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-host-proc-sys-net\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.658789 kubelet[2254]: I0710 00:58:28.656799 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-cilium-cgroup\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.658789 kubelet[2254]: I0710 00:58:28.656837 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/712840c7-03f3-4ec6-80af-1ac7be98c64f-clustermesh-secrets\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.658789 kubelet[2254]: I0710 00:58:28.656850 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/712840c7-03f3-4ec6-80af-1ac7be98c64f-hubble-tls\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659065 kubelet[2254]: I0710 00:58:28.656888 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/712840c7-03f3-4ec6-80af-1ac7be98c64f-cilium-config-path\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659065 kubelet[2254]: I0710 00:58:28.656932 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-cilium-run\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659065 kubelet[2254]: I0710 00:58:28.656995 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-bpf-maps\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659065 kubelet[2254]: I0710 00:58:28.657008 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-lib-modules\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659065 kubelet[2254]: I0710 00:58:28.657089 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-hostproc\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659065 kubelet[2254]: I0710 00:58:28.657169 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-host-proc-sys-kernel\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659236 kubelet[2254]: I0710 00:58:28.657251 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-etc-cni-netd\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659236 kubelet[2254]: I0710 00:58:28.657313 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/712840c7-03f3-4ec6-80af-1ac7be98c64f-xtables-lock\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.659236 kubelet[2254]: I0710 00:58:28.657326 2254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcxn\" (UniqueName: \"kubernetes.io/projected/712840c7-03f3-4ec6-80af-1ac7be98c64f-kube-api-access-vqcxn\") pod \"cilium-btphp\" (UID: \"712840c7-03f3-4ec6-80af-1ac7be98c64f\") " pod="kube-system/cilium-btphp"
Jul 10 00:58:28.957189 env[1361]: time="2025-07-10T00:58:28.956691039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btphp,Uid:712840c7-03f3-4ec6-80af-1ac7be98c64f,Namespace:kube-system,Attempt:0,}"
Jul 10 00:58:28.964593 env[1361]: time="2025-07-10T00:58:28.964529552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:58:28.964593 env[1361]: time="2025-07-10T00:58:28.964569276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:58:28.964760 env[1361]: time="2025-07-10T00:58:28.964737569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:58:28.964990 env[1361]: time="2025-07-10T00:58:28.964959530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8 pid=4253 runtime=io.containerd.runc.v2
Jul 10 00:58:28.991787 env[1361]: time="2025-07-10T00:58:28.991761168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btphp,Uid:712840c7-03f3-4ec6-80af-1ac7be98c64f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\""
Jul 10 00:58:28.993995 env[1361]: time="2025-07-10T00:58:28.993969683Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:58:29.053385 env[1361]: time="2025-07-10T00:58:29.053341031Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a196539a0a3f02a94b625027e1ed6db23b01f4e72d878a49ca1dfec8e04c9f1\""
Jul 10 00:58:29.054676 env[1361]: time="2025-07-10T00:58:29.054653659Z" level=info msg="StartContainer for \"6a196539a0a3f02a94b625027e1ed6db23b01f4e72d878a49ca1dfec8e04c9f1\""
Jul 10 00:58:29.102310 env[1361]: time="2025-07-10T00:58:29.102276707Z" level=info msg="StartContainer for \"6a196539a0a3f02a94b625027e1ed6db23b01f4e72d878a49ca1dfec8e04c9f1\" returns successfully"
Jul 10 00:58:29.131181 kubelet[2254]: I0710 00:58:29.130942 2254 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba" path="/var/lib/kubelet/pods/2b7f47fa-e70d-4a0b-a0a1-66e65b7cceba/volumes"
Jul 10 00:58:29.146324 env[1361]: time="2025-07-10T00:58:29.146290392Z" level=info msg="shim disconnected" id=6a196539a0a3f02a94b625027e1ed6db23b01f4e72d878a49ca1dfec8e04c9f1
Jul 10 00:58:29.146537 env[1361]: time="2025-07-10T00:58:29.146523589Z" level=warning msg="cleaning up after shim disconnected" id=6a196539a0a3f02a94b625027e1ed6db23b01f4e72d878a49ca1dfec8e04c9f1 namespace=k8s.io
Jul 10 00:58:29.146596 env[1361]: time="2025-07-10T00:58:29.146586187Z" level=info msg="cleaning up dead shim"
Jul 10 00:58:29.153127 env[1361]: time="2025-07-10T00:58:29.153091617Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4336 runtime=io.containerd.runc.v2\n"
Jul 10 00:58:29.451204 kubelet[2254]: I0710 00:58:29.451174 2254 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:58:29Z","lastTransitionTime":"2025-07-10T00:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 10 00:58:29.613087 env[1361]: time="2025-07-10T00:58:29.613049328Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:58:29.661739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915377116.mount: Deactivated successfully.
Jul 10 00:58:29.706798 env[1361]: time="2025-07-10T00:58:29.706715152Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ccc9ff865a20c06e499c0af8fb8186cda71793bc4c81599b69fa6d27b17c0607\""
Jul 10 00:58:29.708175 env[1361]: time="2025-07-10T00:58:29.708152756Z" level=info msg="StartContainer for \"ccc9ff865a20c06e499c0af8fb8186cda71793bc4c81599b69fa6d27b17c0607\""
Jul 10 00:58:29.754759 env[1361]: time="2025-07-10T00:58:29.754734769Z" level=info msg="StartContainer for \"ccc9ff865a20c06e499c0af8fb8186cda71793bc4c81599b69fa6d27b17c0607\" returns successfully"
Jul 10 00:58:29.837840 env[1361]: time="2025-07-10T00:58:29.837810551Z" level=info msg="shim disconnected" id=ccc9ff865a20c06e499c0af8fb8186cda71793bc4c81599b69fa6d27b17c0607
Jul 10 00:58:29.838012 env[1361]: time="2025-07-10T00:58:29.837998461Z" level=warning msg="cleaning up after shim disconnected" id=ccc9ff865a20c06e499c0af8fb8186cda71793bc4c81599b69fa6d27b17c0607 namespace=k8s.io
Jul 10 00:58:29.838073 env[1361]: time="2025-07-10T00:58:29.838063903Z" level=info msg="cleaning up dead shim"
Jul 10 00:58:29.844007 env[1361]: time="2025-07-10T00:58:29.843968087Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4398 runtime=io.containerd.runc.v2\n"
Jul 10 00:58:30.617898 env[1361]: time="2025-07-10T00:58:30.617864562Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:58:30.631234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592907654.mount: Deactivated successfully.
Jul 10 00:58:30.635697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027262872.mount: Deactivated successfully.
Jul 10 00:58:30.666762 env[1361]: time="2025-07-10T00:58:30.666690067Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8925464b651f15f8e31dd3b524e273b6469d6185dfaf08cf0f67896e96f41865\""
Jul 10 00:58:30.668108 env[1361]: time="2025-07-10T00:58:30.667680296Z" level=info msg="StartContainer for \"8925464b651f15f8e31dd3b524e273b6469d6185dfaf08cf0f67896e96f41865\""
Jul 10 00:58:30.713452 env[1361]: time="2025-07-10T00:58:30.713419332Z" level=info msg="StartContainer for \"8925464b651f15f8e31dd3b524e273b6469d6185dfaf08cf0f67896e96f41865\" returns successfully"
Jul 10 00:58:30.779790 env[1361]: time="2025-07-10T00:58:30.779669296Z" level=info msg="shim disconnected" id=8925464b651f15f8e31dd3b524e273b6469d6185dfaf08cf0f67896e96f41865
Jul 10 00:58:30.779918 env[1361]: time="2025-07-10T00:58:30.779798754Z" level=warning msg="cleaning up after shim disconnected" id=8925464b651f15f8e31dd3b524e273b6469d6185dfaf08cf0f67896e96f41865 namespace=k8s.io
Jul 10 00:58:30.779918 env[1361]: time="2025-07-10T00:58:30.779808055Z" level=info msg="cleaning up dead shim"
Jul 10 00:58:30.786172 env[1361]: time="2025-07-10T00:58:30.786123795Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4456 runtime=io.containerd.runc.v2\n"
Jul 10 00:58:31.618952 env[1361]: time="2025-07-10T00:58:31.618910649Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:58:31.701584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272629984.mount: Deactivated successfully.
Jul 10 00:58:31.708222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355738007.mount: Deactivated successfully.
Jul 10 00:58:31.739581 env[1361]: time="2025-07-10T00:58:31.739546804Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7da72f3428e31f4973d48d1f07320d93bac750e5c32b3bb072260b08c863d5dd\""
Jul 10 00:58:31.740359 env[1361]: time="2025-07-10T00:58:31.740328585Z" level=info msg="StartContainer for \"7da72f3428e31f4973d48d1f07320d93bac750e5c32b3bb072260b08c863d5dd\""
Jul 10 00:58:31.777376 env[1361]: time="2025-07-10T00:58:31.777341538Z" level=info msg="StartContainer for \"7da72f3428e31f4973d48d1f07320d93bac750e5c32b3bb072260b08c863d5dd\" returns successfully"
Jul 10 00:58:31.794688 env[1361]: time="2025-07-10T00:58:31.794644086Z" level=info msg="shim disconnected" id=7da72f3428e31f4973d48d1f07320d93bac750e5c32b3bb072260b08c863d5dd
Jul 10 00:58:31.794688 env[1361]: time="2025-07-10T00:58:31.794684537Z" level=warning msg="cleaning up after shim disconnected" id=7da72f3428e31f4973d48d1f07320d93bac750e5c32b3bb072260b08c863d5dd namespace=k8s.io
Jul 10 00:58:31.794688 env[1361]: time="2025-07-10T00:58:31.794691650Z" level=info msg="cleaning up dead shim"
Jul 10 00:58:31.801287 env[1361]: time="2025-07-10T00:58:31.801256968Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:58:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4512 runtime=io.containerd.runc.v2\n"
Jul 10 00:58:32.216401 kubelet[2254]: E0710 00:58:32.216364 2254 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:58:32.623317 env[1361]: time="2025-07-10T00:58:32.623085808Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:58:32.665878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019307008.mount: Deactivated successfully.
Jul 10 00:58:32.669741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694359957.mount: Deactivated successfully.
Jul 10 00:58:32.701029 env[1361]: time="2025-07-10T00:58:32.700997435Z" level=info msg="CreateContainer within sandbox \"c992f0d5c2170238d81d6f7b6262828f11865de1e8d34143add207049a6795a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627\""
Jul 10 00:58:32.702317 env[1361]: time="2025-07-10T00:58:32.702300261Z" level=info msg="StartContainer for \"4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627\""
Jul 10 00:58:32.742833 env[1361]: time="2025-07-10T00:58:32.742797356Z" level=info msg="StartContainer for \"4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627\" returns successfully"
Jul 10 00:58:33.664727 kubelet[2254]: I0710 00:58:33.664682 2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-btphp" podStartSLOduration=5.651474793 podStartE2EDuration="5.651474793s" podCreationTimestamp="2025-07-10 00:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:58:33.65055159 +0000 UTC m=+146.782007830" watchObservedRunningTime="2025-07-10 00:58:33.651474793 +0000 UTC m=+146.782931039"
Jul 10 00:58:33.874380 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 10 00:58:35.012831 systemd[1]: run-containerd-runc-k8s.io-4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627-runc.cWdHUF.mount: Deactivated successfully.
Jul 10 00:58:36.887507 systemd-networkd[1112]: lxc_health: Link UP
Jul 10 00:58:36.904874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 10 00:58:36.904457 systemd-networkd[1112]: lxc_health: Gained carrier
Jul 10 00:58:37.204757 systemd[1]: run-containerd-runc-k8s.io-4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627-runc.UN63Un.mount: Deactivated successfully.
Jul 10 00:58:38.256466 systemd-networkd[1112]: lxc_health: Gained IPv6LL
Jul 10 00:58:39.293439 systemd[1]: run-containerd-runc-k8s.io-4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627-runc.rHXJYa.mount: Deactivated successfully.
Jul 10 00:58:41.471574 systemd[1]: run-containerd-runc-k8s.io-4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627-runc.9SF8kv.mount: Deactivated successfully.
Jul 10 00:58:43.566416 systemd[1]: run-containerd-runc-k8s.io-4fc2cbd2faa1100c6f40c89d239e0af60bb875ab1bfdc1f1c057ffa8d3097627-runc.KfRLDY.mount: Deactivated successfully.
Jul 10 00:58:43.606871 sshd[4084]: pam_unix(sshd:session): session closed for user core
Jul 10 00:58:43.608427 systemd[1]: sshd@23-139.178.70.107:22-139.178.68.195:43022.service: Deactivated successfully.
Jul 10 00:58:43.609051 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 00:58:43.609109 systemd-logind[1346]: Session 26 logged out. Waiting for processes to exit.
Jul 10 00:58:43.609841 systemd-logind[1346]: Removed session 26.