Aug 13 01:13:25.660093 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 01:13:25.660108 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:13:25.660114 kernel: Disabled fast string operations
Aug 13 01:13:25.660119 kernel: BIOS-provided physical RAM map:
Aug 13 01:13:25.660122 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Aug 13 01:13:25.660127 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Aug 13 01:13:25.660132 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Aug 13 01:13:25.660137 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Aug 13 01:13:25.660141 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Aug 13 01:13:25.660145 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Aug 13 01:13:25.660149 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Aug 13 01:13:25.660153 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Aug 13 01:13:25.660157 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Aug 13 01:13:25.660161 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 01:13:25.660167 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Aug 13 01:13:25.660172 kernel: NX (Execute Disable) protection: active
Aug 13 01:13:25.660176 kernel: SMBIOS 2.7 present.
Aug 13 01:13:25.660181 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Aug 13 01:13:25.660186 kernel: vmware: hypercall mode: 0x00
Aug 13 01:13:25.660190 kernel: Hypervisor detected: VMware
Aug 13 01:13:25.660195 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Aug 13 01:13:25.660200 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Aug 13 01:13:25.660204 kernel: vmware: using clock offset of 5066133855 ns
Aug 13 01:13:25.660209 kernel: tsc: Detected 3408.000 MHz processor
Aug 13 01:13:25.660214 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:13:25.660219 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:13:25.660223 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Aug 13 01:13:25.660230 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:13:25.660237 kernel: total RAM covered: 3072M
Aug 13 01:13:25.660246 kernel: Found optimal setting for mtrr clean up
Aug 13 01:13:25.660254 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Aug 13 01:13:25.660261 kernel: Using GB pages for direct mapping
Aug 13 01:13:25.660269 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:13:25.660276 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Aug 13 01:13:25.660280 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Aug 13 01:13:25.660285 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Aug 13 01:13:25.660290 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Aug 13 01:13:25.660294 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 01:13:25.660299 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 01:13:25.660305 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Aug 13 01:13:25.660312 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Aug 13 01:13:25.660317 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Aug 13 01:13:25.660321 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Aug 13 01:13:25.660327 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Aug 13 01:13:25.660333 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Aug 13 01:13:25.660338 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Aug 13 01:13:25.660343 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Aug 13 01:13:25.660347 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 01:13:25.660352 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 01:13:25.660357 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Aug 13 01:13:25.660362 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Aug 13 01:13:25.660367 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Aug 13 01:13:25.660372 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Aug 13 01:13:25.660378 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Aug 13 01:13:25.660383 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Aug 13 01:13:25.660388 kernel: system APIC only can use physical flat
Aug 13 01:13:25.660393 kernel: Setting APIC routing to physical flat.
Aug 13 01:13:25.660397 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 01:13:25.660403 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Aug 13 01:13:25.660407 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Aug 13 01:13:25.660412 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Aug 13 01:13:25.660417 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Aug 13 01:13:25.660423 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Aug 13 01:13:25.660428 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Aug 13 01:13:25.660433 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Aug 13 01:13:25.660438 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Aug 13 01:13:25.660442 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Aug 13 01:13:25.660447 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Aug 13 01:13:25.660452 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Aug 13 01:13:25.660457 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Aug 13 01:13:25.660462 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Aug 13 01:13:25.660466 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Aug 13 01:13:25.660472 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Aug 13 01:13:25.660477 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Aug 13 01:13:25.660482 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Aug 13 01:13:25.660487 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Aug 13 01:13:25.660492 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Aug 13 01:13:25.660497 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Aug 13 01:13:25.660501 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Aug 13 01:13:25.660506 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Aug 13 01:13:25.660511 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Aug 13 01:13:25.660516 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Aug 13 01:13:25.660522 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Aug 13 01:13:25.660527 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Aug 13 01:13:25.660531 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Aug 13 01:13:25.660536 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Aug 13 01:13:25.660541 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Aug 13 01:13:25.660546 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Aug 13 01:13:25.660551 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Aug 13 01:13:25.660556 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Aug 13 01:13:25.660561 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Aug 13 01:13:25.660566 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Aug 13 01:13:25.660571 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Aug 13 01:13:25.660576 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Aug 13 01:13:25.660581 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Aug 13 01:13:25.660586 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Aug 13 01:13:25.660591 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Aug 13 01:13:25.660596 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Aug 13 01:13:25.660600 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Aug 13 01:13:25.660605 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Aug 13 01:13:25.660610 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Aug 13 01:13:25.660616 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Aug 13 01:13:25.660621 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Aug 13 01:13:25.660626 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Aug 13 01:13:25.660630 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Aug 13 01:13:25.660635 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Aug 13 01:13:25.660640 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Aug 13 01:13:25.660645 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Aug 13 01:13:25.660650 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Aug 13 01:13:25.660655 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Aug 13 01:13:25.660660 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Aug 13 01:13:25.660665 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Aug 13 01:13:25.660670 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Aug 13 01:13:25.660675 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Aug 13 01:13:25.660680 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Aug 13 01:13:25.660685 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Aug 13 01:13:25.660689 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Aug 13 01:13:25.660695 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Aug 13 01:13:25.660704 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Aug 13 01:13:25.660709 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Aug 13 01:13:25.660714 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Aug 13 01:13:25.660719 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Aug 13 01:13:25.660726 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Aug 13 01:13:25.660731 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Aug 13 01:13:25.660736 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Aug 13 01:13:25.660741 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Aug 13 01:13:25.662191 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Aug 13 01:13:25.662198 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Aug 13 01:13:25.662205 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Aug 13 01:13:25.662210 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Aug 13 01:13:25.662216 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Aug 13 01:13:25.662221 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Aug 13 01:13:25.662226 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Aug 13 01:13:25.662232 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Aug 13 01:13:25.662237 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Aug 13 01:13:25.662242 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Aug 13 01:13:25.662248 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Aug 13 01:13:25.662253 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Aug 13 01:13:25.662259 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Aug 13 01:13:25.662265 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Aug 13 01:13:25.662270 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Aug 13 01:13:25.662276 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Aug 13 01:13:25.662281 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Aug 13 01:13:25.662289 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Aug 13 01:13:25.662298 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Aug 13 01:13:25.662303 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Aug 13 01:13:25.662308 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Aug 13 01:13:25.662313 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Aug 13 01:13:25.662320 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Aug 13 01:13:25.662325 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Aug 13 01:13:25.662332 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Aug 13 01:13:25.662340 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Aug 13 01:13:25.662348 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Aug 13 01:13:25.662356 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Aug 13 01:13:25.662364 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Aug 13 01:13:25.662372 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Aug 13 01:13:25.662379 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Aug 13 01:13:25.662388 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Aug 13 01:13:25.662397 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Aug 13 01:13:25.662406 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Aug 13 01:13:25.662412 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Aug 13 01:13:25.662417 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Aug 13 01:13:25.662423 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Aug 13 01:13:25.662428 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Aug 13 01:13:25.662433 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Aug 13 01:13:25.662439 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Aug 13 01:13:25.662446 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Aug 13 01:13:25.662453 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Aug 13 01:13:25.662459 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Aug 13 01:13:25.662465 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Aug 13 01:13:25.662474 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Aug 13 01:13:25.662482 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Aug 13 01:13:25.662487 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Aug 13 01:13:25.662495 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Aug 13 01:13:25.662504 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Aug 13 01:13:25.662513 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Aug 13 01:13:25.662521 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Aug 13 01:13:25.662529 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Aug 13 01:13:25.662534 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Aug 13 01:13:25.662539 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Aug 13 01:13:25.662547 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Aug 13 01:13:25.662553 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Aug 13 01:13:25.662558 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Aug 13 01:13:25.662564 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Aug 13 01:13:25.662569 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Aug 13 01:13:25.662574 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 01:13:25.662581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 01:13:25.662587 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Aug 13 01:13:25.662596 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Aug 13 01:13:25.662604 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Aug 13 01:13:25.662609 kernel: Zone ranges:
Aug 13 01:13:25.662615 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:13:25.662620 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Aug 13 01:13:25.662625 kernel: Normal empty
Aug 13 01:13:25.662634 kernel: Movable zone start for each node
Aug 13 01:13:25.662645 kernel: Early memory node ranges
Aug 13 01:13:25.662654 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Aug 13 01:13:25.662663 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Aug 13 01:13:25.662672 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Aug 13 01:13:25.662681 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Aug 13 01:13:25.662688 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:13:25.662694 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Aug 13 01:13:25.662704 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Aug 13 01:13:25.662713 kernel: ACPI: PM-Timer IO Port: 0x1008
Aug 13 01:13:25.662722 kernel: system APIC only can use physical flat
Aug 13 01:13:25.662729 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Aug 13 01:13:25.662734 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 01:13:25.662740 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 01:13:25.664995 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Aug 13 01:13:25.665002 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Aug 13 01:13:25.665008 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Aug 13 01:13:25.665013 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Aug 13 01:13:25.665018 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Aug 13 01:13:25.665024 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Aug 13 01:13:25.665031 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Aug 13 01:13:25.665037 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Aug 13 01:13:25.665042 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Aug 13 01:13:25.665048 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Aug 13 01:13:25.665053 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Aug 13 01:13:25.665059 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Aug 13 01:13:25.665064 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Aug 13 01:13:25.665069 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Aug 13 01:13:25.665074 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Aug 13 01:13:25.665081 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Aug 13 01:13:25.665086 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Aug 13 01:13:25.665091 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Aug 13 01:13:25.665097 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Aug 13 01:13:25.665102 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Aug 13 01:13:25.665107 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Aug 13 01:13:25.665113 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Aug 13 01:13:25.665118 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Aug 13 01:13:25.665123 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Aug 13 01:13:25.665128 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Aug 13 01:13:25.665135 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Aug 13 01:13:25.665140 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Aug 13 01:13:25.665145 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Aug 13 01:13:25.665150 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Aug 13 01:13:25.665156 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Aug 13 01:13:25.665161 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Aug 13 01:13:25.665167 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Aug 13 01:13:25.665172 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Aug 13 01:13:25.665177 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Aug 13 01:13:25.665183 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Aug 13 01:13:25.665189 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Aug 13 01:13:25.665194 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Aug 13 01:13:25.665200 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Aug 13 01:13:25.665205 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Aug 13 01:13:25.665210 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Aug 13 01:13:25.665216 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Aug 13 01:13:25.665221 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Aug 13 01:13:25.665226 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Aug 13 01:13:25.665232 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Aug 13 01:13:25.665238 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Aug 13 01:13:25.665243 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Aug 13 01:13:25.665248 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Aug 13 01:13:25.665253 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Aug 13 01:13:25.665259 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Aug 13 01:13:25.665264 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Aug 13 01:13:25.665270 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Aug 13 01:13:25.665275 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Aug 13 01:13:25.665281 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Aug 13 01:13:25.665290 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Aug 13 01:13:25.665295 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Aug 13 01:13:25.665301 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Aug 13 01:13:25.665306 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Aug 13 01:13:25.665311 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Aug 13 01:13:25.665316 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Aug 13 01:13:25.665321 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Aug 13 01:13:25.665327 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Aug 13 01:13:25.665333 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Aug 13 01:13:25.665338 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Aug 13 01:13:25.665344 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Aug 13 01:13:25.665349 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Aug 13 01:13:25.665354 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Aug 13 01:13:25.665359 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Aug 13 01:13:25.665364 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Aug 13 01:13:25.665369 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Aug 13 01:13:25.665375 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Aug 13 01:13:25.665380 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Aug 13 01:13:25.665387 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Aug 13 01:13:25.665392 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Aug 13 01:13:25.665397 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Aug 13 01:13:25.665402 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Aug 13 01:13:25.665408 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Aug 13 01:13:25.665413 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Aug 13 01:13:25.665418 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Aug 13 01:13:25.665424 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Aug 13 01:13:25.665429 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Aug 13 01:13:25.665435 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Aug 13 01:13:25.665441 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Aug 13 01:13:25.665446 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Aug 13 01:13:25.665451 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Aug 13 01:13:25.665456 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Aug 13 01:13:25.665461 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Aug 13 01:13:25.665467 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Aug 13 01:13:25.665472 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Aug 13 01:13:25.665477 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Aug 13 01:13:25.665482 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Aug 13 01:13:25.665488 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Aug 13 01:13:25.665493 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Aug 13 01:13:25.665499 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Aug 13 01:13:25.665504 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Aug 13 01:13:25.665509 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Aug 13 01:13:25.665515 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Aug 13 01:13:25.665520 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Aug 13 01:13:25.665525 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Aug 13 01:13:25.665531 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Aug 13 01:13:25.665537 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Aug 13 01:13:25.665542 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Aug 13 01:13:25.665548 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Aug 13 01:13:25.665553 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Aug 13 01:13:25.665558 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Aug 13 01:13:25.665564 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Aug 13 01:13:25.665569 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Aug 13 01:13:25.665574 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Aug 13 01:13:25.665579 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Aug 13 01:13:25.665585 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Aug 13 01:13:25.665591 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Aug 13 01:13:25.665596 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Aug 13 01:13:25.665601 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Aug 13 01:13:25.665606 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Aug 13 01:13:25.665612 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Aug 13 01:13:25.665617 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Aug 13 01:13:25.665622 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Aug 13 01:13:25.665627 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Aug 13 01:13:25.665633 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Aug 13 01:13:25.665639 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Aug 13 01:13:25.665644 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Aug 13 01:13:25.665649 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Aug 13 01:13:25.665654 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Aug 13 01:13:25.665659 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Aug 13 01:13:25.665665 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Aug 13 01:13:25.665670 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Aug 13 01:13:25.665676 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:13:25.665681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Aug 13 01:13:25.665687 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:13:25.665693 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Aug 13 01:13:25.665698 kernel: TSC deadline timer available
Aug 13 01:13:25.665704 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Aug 13 01:13:25.665709 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Aug 13 01:13:25.665714 kernel: Booting paravirtualized kernel on VMware hypervisor
Aug 13 01:13:25.665720 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:13:25.665725 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Aug 13 01:13:25.665731 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Aug 13 01:13:25.665737 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Aug 13 01:13:25.665748 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Aug 13 01:13:25.665754 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Aug 13 01:13:25.665759 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Aug 13 01:13:25.665764 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Aug 13 01:13:25.665769 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Aug 13 01:13:25.665774 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Aug 13 01:13:25.665780 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Aug 13 01:13:25.665792 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Aug 13 01:13:25.665799 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Aug 13 01:13:25.665805 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Aug 13 01:13:25.665811 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Aug 13 01:13:25.665816 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Aug 13 01:13:25.665822 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Aug 13 01:13:25.665827 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Aug 13 01:13:25.665833 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Aug 13 01:13:25.665838 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Aug 13 01:13:25.665845 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Aug 13 01:13:25.665851 kernel: Policy zone: DMA32
Aug 13 01:13:25.665857 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:13:25.665863 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:13:25.665869 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Aug 13 01:13:25.665874 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Aug 13 01:13:25.665880 kernel: printk: log_buf_len min size: 262144 bytes
Aug 13 01:13:25.665886 kernel: printk: log_buf_len: 1048576 bytes
Aug 13 01:13:25.665892 kernel: printk: early log buf free: 239728(91%)
Aug 13 01:13:25.665898 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:13:25.665904 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 01:13:25.665909 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:13:25.665915 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 155976K reserved, 0K cma-reserved)
Aug 13 01:13:25.665921 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Aug 13 01:13:25.665926 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 01:13:25.665932 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 01:13:25.665939 kernel: rcu: Hierarchical RCU implementation.
Aug 13 01:13:25.665945 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:13:25.665951 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Aug 13 01:13:25.665957 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:13:25.665962 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:13:25.665968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:13:25.665974 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Aug 13 01:13:25.665980 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Aug 13 01:13:25.665986 kernel: random: crng init done
Aug 13 01:13:25.665991 kernel: Console: colour VGA+ 80x25
Aug 13 01:13:25.665997 kernel: printk: console [tty0] enabled
Aug 13 01:13:25.666003 kernel: printk: console [ttyS0] enabled
Aug 13 01:13:25.666008 kernel: ACPI: Core revision 20210730
Aug 13 01:13:25.666014 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Aug 13 01:13:25.666020 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:13:25.666026 kernel: x2apic enabled
Aug 13 01:13:25.666032 kernel: Switched APIC routing to physical x2apic.
Aug 13 01:13:25.666038 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:13:25.666044 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Aug 13 01:13:25.666050 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Aug 13 01:13:25.666056 kernel: Disabled fast string operations
Aug 13 01:13:25.666061 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 01:13:25.666067 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 01:13:25.666073 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:13:25.666078 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Aug 13 01:13:25.666085 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 01:13:25.666091 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 01:13:25.666097 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Aug 13 01:13:25.666102 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Aug 13 01:13:25.666108 kernel: RETBleed: Mitigation: Enhanced IBRS Aug 13 01:13:25.666114 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:13:25.666120 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 01:13:25.666125 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:13:25.666131 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 01:13:25.666138 kernel: GDS: Unknown: Dependent on hypervisor status Aug 13 01:13:25.666143 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 01:13:25.666149 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:13:25.666155 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:13:25.666160 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:13:25.666166 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:13:25.666172 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Aug 13 01:13:25.666177 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:13:25.666183 kernel: pid_max: default: 131072 minimum: 1024 Aug 13 01:13:25.666190 kernel: LSM: Security Framework initializing Aug 13 01:13:25.666196 kernel: SELinux: Initializing. 
Aug 13 01:13:25.666202 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 01:13:25.666208 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 01:13:25.666214 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Aug 13 01:13:25.666220 kernel: Performance Events: Skylake events, core PMU driver. Aug 13 01:13:25.666226 kernel: core: CPUID marked event: 'cpu cycles' unavailable Aug 13 01:13:25.666232 kernel: core: CPUID marked event: 'instructions' unavailable Aug 13 01:13:25.666238 kernel: core: CPUID marked event: 'bus cycles' unavailable Aug 13 01:13:25.666244 kernel: core: CPUID marked event: 'cache references' unavailable Aug 13 01:13:25.666249 kernel: core: CPUID marked event: 'cache misses' unavailable Aug 13 01:13:25.666254 kernel: core: CPUID marked event: 'branch instructions' unavailable Aug 13 01:13:25.666260 kernel: core: CPUID marked event: 'branch misses' unavailable Aug 13 01:13:25.666266 kernel: ... version: 1 Aug 13 01:13:25.666271 kernel: ... bit width: 48 Aug 13 01:13:25.666277 kernel: ... generic registers: 4 Aug 13 01:13:25.666283 kernel: ... value mask: 0000ffffffffffff Aug 13 01:13:25.666289 kernel: ... max period: 000000007fffffff Aug 13 01:13:25.666295 kernel: ... fixed-purpose events: 0 Aug 13 01:13:25.666300 kernel: ... event mask: 000000000000000f Aug 13 01:13:25.666306 kernel: signal: max sigframe size: 1776 Aug 13 01:13:25.666312 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:13:25.666317 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 01:13:25.666323 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:13:25.666329 kernel: x86: Booting SMP configuration: Aug 13 01:13:25.666334 kernel: .... 
node #0, CPUs: #1 Aug 13 01:13:25.666341 kernel: Disabled fast string operations Aug 13 01:13:25.666346 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Aug 13 01:13:25.666352 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 01:13:25.666357 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:13:25.666363 kernel: smpboot: Max logical packages: 128 Aug 13 01:13:25.666369 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Aug 13 01:13:25.666374 kernel: devtmpfs: initialized Aug 13 01:13:25.666380 kernel: x86/mm: Memory block size: 128MB Aug 13 01:13:25.666386 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Aug 13 01:13:25.666392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:13:25.666398 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Aug 13 01:13:25.666404 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:13:25.666410 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:13:25.666415 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:13:25.666421 kernel: audit: type=2000 audit(1755047603.084:1): state=initialized audit_enabled=0 res=1 Aug 13 01:13:25.666427 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:13:25.666432 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:13:25.666438 kernel: cpuidle: using governor menu Aug 13 01:13:25.666444 kernel: Simple Boot Flag at 0x36 set to 0x80 Aug 13 01:13:25.666450 kernel: ACPI: bus type PCI registered Aug 13 01:13:25.666456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:13:25.666461 kernel: dca service started, version 1.12.1 Aug 13 01:13:25.666467 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Aug 13 01:13:25.666473 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Aug 13 01:13:25.666478 kernel: PCI: Using configuration type 1 for base access Aug 13 01:13:25.666484 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 01:13:25.666491 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:13:25.666496 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:13:25.666503 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:13:25.666508 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:13:25.666514 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:13:25.666520 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 01:13:25.666526 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 01:13:25.666531 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 01:13:25.666537 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:13:25.666542 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Aug 13 01:13:25.666548 kernel: ACPI: Interpreter enabled Aug 13 01:13:25.666555 kernel: ACPI: PM: (supports S0 S1 S5) Aug 13 01:13:25.666560 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:13:25.666566 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:13:25.666572 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Aug 13 01:13:25.666577 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Aug 13 01:13:25.666656 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:13:25.666709 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Aug 13 01:13:25.666767 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Aug 13 01:13:25.666776 kernel: PCI host bridge to bus 0000:00 Aug 13 01:13:25.666827 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:13:25.666871 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Aug 13 01:13:25.666913 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:13:25.666955 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:13:25.666996 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Aug 13 01:13:25.667037 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Aug 13 01:13:25.667095 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Aug 13 01:13:25.667148 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Aug 13 01:13:25.667205 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Aug 13 01:13:25.667258 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Aug 13 01:13:25.667307 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Aug 13 01:13:25.667814 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 01:13:25.667870 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 01:13:25.667920 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 01:13:25.667969 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 01:13:25.668024 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Aug 13 01:13:25.668073 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Aug 13 01:13:25.668122 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Aug 13 01:13:25.668177 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Aug 13 01:13:25.668228 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Aug 13 01:13:25.668277 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Aug 13 01:13:25.668346 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Aug 13 01:13:25.668397 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Aug 13 01:13:25.668446 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Aug 13 01:13:25.668492 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Aug 13 01:13:25.668543 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Aug 13 01:13:25.668590 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:13:25.668643 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Aug 13 01:13:25.668695 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.669781 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.669856 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.669916 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.669974 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670024 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670076 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670125 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670178 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670229 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670281 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670331 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670383 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670432 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670482 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670530 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670584 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670631 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.670684 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.670732 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Aug 13 01:13:25.671813 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.671869 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.671922 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.671970 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672023 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672071 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672122 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672172 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672224 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672271 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672322 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672370 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672421 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672471 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672522 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672570 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672621 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.672669 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.672723 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.673809 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.673869 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.673919 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.673971 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.674019 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.674071 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.674133 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.674190 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.675793 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.675871 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.675926 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.675980 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676030 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676085 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676134 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676189 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676238 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676289 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676338 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676393 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676441 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676493 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676542 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676593 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:13:25.676641 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.676693 kernel: pci_bus 0000:01: extended config space not accessible Aug 13 01:13:25.677977 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 01:13:25.678043 kernel: pci_bus 0000:02: extended config space not accessible Aug 
13 01:13:25.678054 kernel: acpiphp: Slot [32] registered Aug 13 01:13:25.678061 kernel: acpiphp: Slot [33] registered Aug 13 01:13:25.678066 kernel: acpiphp: Slot [34] registered Aug 13 01:13:25.678072 kernel: acpiphp: Slot [35] registered Aug 13 01:13:25.678078 kernel: acpiphp: Slot [36] registered Aug 13 01:13:25.678085 kernel: acpiphp: Slot [37] registered Aug 13 01:13:25.678091 kernel: acpiphp: Slot [38] registered Aug 13 01:13:25.678097 kernel: acpiphp: Slot [39] registered Aug 13 01:13:25.678103 kernel: acpiphp: Slot [40] registered Aug 13 01:13:25.678109 kernel: acpiphp: Slot [41] registered Aug 13 01:13:25.678115 kernel: acpiphp: Slot [42] registered Aug 13 01:13:25.678120 kernel: acpiphp: Slot [43] registered Aug 13 01:13:25.678126 kernel: acpiphp: Slot [44] registered Aug 13 01:13:25.678132 kernel: acpiphp: Slot [45] registered Aug 13 01:13:25.678137 kernel: acpiphp: Slot [46] registered Aug 13 01:13:25.678144 kernel: acpiphp: Slot [47] registered Aug 13 01:13:25.678150 kernel: acpiphp: Slot [48] registered Aug 13 01:13:25.678156 kernel: acpiphp: Slot [49] registered Aug 13 01:13:25.678161 kernel: acpiphp: Slot [50] registered Aug 13 01:13:25.678167 kernel: acpiphp: Slot [51] registered Aug 13 01:13:25.678173 kernel: acpiphp: Slot [52] registered Aug 13 01:13:25.678178 kernel: acpiphp: Slot [53] registered Aug 13 01:13:25.678184 kernel: acpiphp: Slot [54] registered Aug 13 01:13:25.678190 kernel: acpiphp: Slot [55] registered Aug 13 01:13:25.678197 kernel: acpiphp: Slot [56] registered Aug 13 01:13:25.678202 kernel: acpiphp: Slot [57] registered Aug 13 01:13:25.678208 kernel: acpiphp: Slot [58] registered Aug 13 01:13:25.678214 kernel: acpiphp: Slot [59] registered Aug 13 01:13:25.678219 kernel: acpiphp: Slot [60] registered Aug 13 01:13:25.678225 kernel: acpiphp: Slot [61] registered Aug 13 01:13:25.678231 kernel: acpiphp: Slot [62] registered Aug 13 01:13:25.678236 kernel: acpiphp: Slot [63] registered Aug 13 01:13:25.678294 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Aug 13 01:13:25.678631 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 01:13:25.678693 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 01:13:25.678754 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 01:13:25.678813 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Aug 13 01:13:25.678862 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Aug 13 01:13:25.678911 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Aug 13 01:13:25.678959 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Aug 13 01:13:25.679802 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Aug 13 01:13:25.679869 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Aug 13 01:13:25.679924 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Aug 13 01:13:25.679976 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Aug 13 01:13:25.680028 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 01:13:25.680079 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Aug 13 01:13:25.680129 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 01:13:25.680180 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 01:13:25.680231 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 01:13:25.680280 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 01:13:25.680330 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 01:13:25.680379 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Aug 13 01:13:25.680427 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 01:13:25.680475 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 01:13:25.680525 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 01:13:25.680573 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 01:13:25.680624 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 01:13:25.680672 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 01:13:25.680722 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 01:13:25.681806 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 01:13:25.681864 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 01:13:25.681934 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 01:13:25.681996 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 01:13:25.682066 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 01:13:25.682133 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 01:13:25.682208 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 01:13:25.682260 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 01:13:25.682311 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 01:13:25.682362 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 01:13:25.682411 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 01:13:25.682459 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 01:13:25.682508 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 01:13:25.682556 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 01:13:25.682612 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Aug 13 01:13:25.682663 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Aug 13 01:13:25.682714 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Aug 13 01:13:25.682772 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Aug 13 01:13:25.682823 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Aug 13 01:13:25.682872 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 01:13:25.682922 kernel: pci 0000:0b:00.0: supports D1 D2 Aug 13 01:13:25.682972 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 01:13:25.683022 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 01:13:25.683071 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 01:13:25.683121 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 01:13:25.683169 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 01:13:25.683218 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 01:13:25.683266 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 01:13:25.683314 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 01:13:25.683363 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 01:13:25.683421 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 01:13:25.683475 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 01:13:25.683536 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 01:13:25.683599 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 01:13:25.683657 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 01:13:25.683727 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 01:13:25.683792 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 01:13:25.683864 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 01:13:25.683944 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 01:13:25.683997 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 01:13:25.684049 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 01:13:25.684099 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 01:13:25.686232 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 01:13:25.686298 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 01:13:25.686352 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 01:13:25.686411 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 01:13:25.686471 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 01:13:25.686523 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 01:13:25.686597 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 01:13:25.686649 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 01:13:25.686698 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 01:13:25.687266 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 01:13:25.687336 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 01:13:25.687393 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 01:13:25.687444 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 01:13:25.687497 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 01:13:25.687545 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 01:13:25.687597 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 01:13:25.687645 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 01:13:25.687698 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 01:13:25.687767 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 01:13:25.687818 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 01:13:25.687867 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 01:13:25.687917 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 01:13:25.687967 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 01:13:25.688015 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 01:13:25.688062 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 01:13:25.688112 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 01:13:25.688161 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 01:13:25.688208 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 01:13:25.688258 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 01:13:25.688321 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 01:13:25.688370 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 01:13:25.688421 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 01:13:25.688476 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 01:13:25.688529 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 01:13:25.688578 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 01:13:25.688626 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 01:13:25.688674 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 01:13:25.688724 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 01:13:25.688786 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 01:13:25.688836 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 01:13:25.688903 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 01:13:25.688951 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 01:13:25.689000 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 01:13:25.689049 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 01:13:25.689100 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 01:13:25.689149 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 01:13:25.689196 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 01:13:25.689246 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 01:13:25.689306 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 
01:13:25.689355 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 01:13:25.689403 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 01:13:25.689452 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 01:13:25.689503 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 01:13:25.689551 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 01:13:25.689599 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 01:13:25.689647 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 01:13:25.689695 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 01:13:25.689749 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 01:13:25.689799 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 01:13:25.689847 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 01:13:25.689857 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Aug 13 01:13:25.689864 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Aug 13 01:13:25.689870 kernel: ACPI: PCI: Interrupt link LNKB disabled Aug 13 01:13:25.689876 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:13:25.689882 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Aug 13 01:13:25.689888 kernel: iommu: Default domain type: Translated Aug 13 01:13:25.689893 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:13:25.689942 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Aug 13 01:13:25.689991 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:13:25.690040 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Aug 13 01:13:25.690049 kernel: vgaarb: loaded Aug 13 01:13:25.690055 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 01:13:25.690060 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 01:13:25.690066 kernel: PTP clock support registered Aug 13 01:13:25.690072 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:13:25.690078 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:13:25.690084 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Aug 13 01:13:25.690089 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Aug 13 01:13:25.690096 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Aug 13 01:13:25.690103 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Aug 13 01:13:25.690108 kernel: clocksource: Switched to clocksource tsc-early Aug 13 01:13:25.690114 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:13:25.690120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:13:25.690126 kernel: pnp: PnP ACPI init Aug 13 01:13:25.690177 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Aug 13 01:13:25.690222 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Aug 13 01:13:25.690269 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Aug 13 01:13:25.690316 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Aug 13 01:13:25.690365 kernel: pnp 00:06: [dma 2] Aug 13 01:13:25.690413 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Aug 13 01:13:25.690457 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Aug 13 01:13:25.690500 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Aug 13 01:13:25.690509 kernel: pnp: PnP ACPI: found 8 devices Aug 13 01:13:25.690516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:13:25.690522 kernel: NET: Registered PF_INET protocol family Aug 13 01:13:25.690528 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:13:25.690534 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Aug 13 01:13:25.690540 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:13:25.690546 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 01:13:25.690551 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 01:13:25.690560 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 01:13:25.690566 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 01:13:25.690572 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 01:13:25.690578 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:13:25.690584 kernel: NET: Registered PF_XDP protocol family Aug 13 01:13:25.690636 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Aug 13 01:13:25.690686 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 01:13:25.690737 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 01:13:25.690800 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 01:13:25.690854 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 01:13:25.691619 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Aug 13 01:13:25.691689 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Aug 13 01:13:25.691767 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Aug 13 01:13:25.691822 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Aug 13 01:13:25.691872 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Aug 13 01:13:25.691924 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Aug 13 01:13:25.691973 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Aug 13 01:13:25.692036 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Aug 13 01:13:25.692085 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Aug 13 01:13:25.692133 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Aug 13 01:13:25.692188 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Aug 13 01:13:25.692244 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Aug 13 01:13:25.692293 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Aug 13 01:13:25.692342 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Aug 13 01:13:25.692390 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Aug 13 01:13:25.692438 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Aug 13 01:13:25.692489 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Aug 13 01:13:25.692537 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Aug 13 01:13:25.692585 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 01:13:25.692633 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 01:13:25.692681 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.692730 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.692807 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694065 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694153 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694213 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694274 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694330 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694378 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694427 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694476 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694527 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694575 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694623 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694671 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694718 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694789 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694838 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694887 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.694938 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.694990 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695049 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.695099 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695146 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.695195 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695242 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.695291 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695339 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Aug 13 01:13:25.695389 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695448 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.695496 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695544 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.695593 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.695642 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.696405 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.696478 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.696860 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.696920 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.697197 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.697256 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.697314 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.697369 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.697418 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.697471 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.697523 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.697571 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.697619 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.697666 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.697714 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698104 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698169 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698234 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698309 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698371 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698435 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698505 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698567 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698628 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698678 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698726 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698798 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698851 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.698901 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.698967 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.699074 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.699148 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.699201 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.699269 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.699345 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.699410 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.699460 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.699516 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.699568 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Aug 13 01:13:25.699625 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.699683 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.699976 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.700039 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.700102 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.700182 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.700247 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.700297 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.700361 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.700682 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 01:13:25.700773 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:13:25.700842 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 01:13:25.700902 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Aug 13 01:13:25.701177 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 01:13:25.701237 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 01:13:25.701521 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 01:13:25.701580 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Aug 13 01:13:25.701635 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 01:13:25.701684 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 01:13:25.701733 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 01:13:25.701816 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 01:13:25.701867 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 01:13:25.701915 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Aug 13 01:13:25.701962 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 01:13:25.702009 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 01:13:25.702060 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 01:13:25.702110 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 01:13:25.702158 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 01:13:25.702205 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 01:13:25.702253 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 01:13:25.702300 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 01:13:25.702348 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 01:13:25.702396 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 01:13:25.702444 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 01:13:25.702492 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 01:13:25.702543 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 01:13:25.702591 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 01:13:25.702640 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 01:13:25.702688 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 01:13:25.702734 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 01:13:25.702795 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 01:13:25.702845 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 01:13:25.702896 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 01:13:25.702945 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 01:13:25.702998 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Aug 13 
01:13:25.703297 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 01:13:25.703359 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 01:13:25.703411 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 01:13:25.703481 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 01:13:25.703760 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 01:13:25.703822 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 01:13:25.703877 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 01:13:25.703959 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 01:13:25.704226 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 01:13:25.704296 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 01:13:25.704369 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 01:13:25.704430 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 01:13:25.704500 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 01:13:25.704567 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 01:13:25.704617 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 01:13:25.704670 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 01:13:25.704725 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 01:13:25.705052 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 01:13:25.705119 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 01:13:25.705428 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 01:13:25.705492 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 01:13:25.705566 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 01:13:25.705636 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 
01:13:25.705707 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 01:13:25.706067 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 01:13:25.706150 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 01:13:25.706207 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 01:13:25.706271 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 01:13:25.706347 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 01:13:25.706408 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 01:13:25.706467 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 01:13:25.706543 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 01:13:25.706612 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 01:13:25.706667 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 01:13:25.706719 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 01:13:25.707100 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 01:13:25.707166 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 01:13:25.707231 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 01:13:25.707307 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 01:13:25.707378 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 01:13:25.707429 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 01:13:25.707480 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 01:13:25.707547 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 01:13:25.707612 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 01:13:25.707687 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 01:13:25.708080 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Aug 13 01:13:25.708472 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 01:13:25.708546 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 01:13:25.708622 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 01:13:25.708691 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 01:13:25.709047 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 01:13:25.709111 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 01:13:25.709181 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 01:13:25.709244 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 01:13:25.709302 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 01:13:25.709376 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 01:13:25.709449 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 01:13:25.709507 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 01:13:25.709557 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 01:13:25.709920 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 01:13:25.709996 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 01:13:25.710049 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 01:13:25.710100 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 01:13:25.710177 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 01:13:25.710251 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 01:13:25.710325 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 01:13:25.710375 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 01:13:25.710426 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 01:13:25.710493 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Aug 13 01:13:25.710554 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 01:13:25.710615 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 01:13:25.710680 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 01:13:25.711019 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 01:13:25.711095 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 01:13:25.711404 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 01:13:25.711484 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 01:13:25.711554 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 01:13:25.711612 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 01:13:25.711662 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 01:13:25.711720 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 01:13:25.712063 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 01:13:25.712133 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 01:13:25.712188 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 01:13:25.712553 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Aug 13 01:13:25.712613 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Aug 13 01:13:25.712679 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Aug 13 01:13:25.712737 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Aug 13 01:13:25.713134 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 01:13:25.713201 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 01:13:25.713252 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 01:13:25.713302 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Aug 13 01:13:25.713357 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Aug 13 01:13:25.713408 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Aug 13 01:13:25.713477 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Aug 13 01:13:25.713534 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Aug 13 01:13:25.713601 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 01:13:25.713659 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Aug 13 01:13:25.713725 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Aug 13 01:13:25.713809 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 01:13:25.713862 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Aug 13 01:13:25.713919 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Aug 13 01:13:25.714235 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 01:13:25.714311 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Aug 13 01:13:25.714367 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 01:13:25.714440 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Aug 13 01:13:25.714514 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 01:13:25.714569 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Aug 13 01:13:25.714614 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 01:13:25.714668 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Aug 13 01:13:25.714729 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 01:13:25.714805 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Aug 13 01:13:25.714863 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 01:13:25.714925 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Aug 13 01:13:25.714984 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Aug 13 01:13:25.715051 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 01:13:25.715102 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Aug 13 01:13:25.715147 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Aug 13 01:13:25.715198 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 01:13:25.715274 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Aug 13 01:13:25.715341 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Aug 13 01:13:25.715395 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 01:13:25.715458 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Aug 13 01:13:25.715515 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 01:13:25.715579 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Aug 13 01:13:25.715625 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 01:13:25.715680 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Aug 13 01:13:25.715738 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 01:13:25.716093 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Aug 13 01:13:25.716145 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 01:13:25.716211 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Aug 13 01:13:25.716278 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 01:13:25.716349 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Aug 13 01:13:25.716400 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Aug 13 01:13:25.716444 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 01:13:25.716502 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Aug 13 01:13:25.716549 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Aug 13 01:13:25.716614 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 01:13:25.716672 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Aug 13 01:13:25.716731 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Aug 13 01:13:25.717086 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 01:13:25.717406 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Aug 13 01:13:25.717471 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 01:13:25.717896 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Aug 13 01:13:25.717971 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 01:13:25.718025 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Aug 13 01:13:25.718215 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 01:13:25.718281 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Aug 13 01:13:25.718338 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 01:13:25.718737 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Aug 13 01:13:25.718826 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 01:13:25.718879 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Aug 13 01:13:25.719084 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Aug 13 01:13:25.719384 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 01:13:25.719452 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Aug 13 01:13:25.719866 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Aug 13 01:13:25.719919 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 01:13:25.719982 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Aug 13 01:13:25.720036 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 01:13:25.720223 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Aug 13 01:13:25.720294 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 01:13:25.720356 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Aug 13 01:13:25.720404 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 01:13:25.720472 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Aug 13 01:13:25.720548 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 01:13:25.720614 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Aug 13 01:13:25.720664 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 01:13:25.720721 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Aug 13 01:13:25.720794 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 01:13:25.720871 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 01:13:25.720886 kernel: PCI: CLS 32 bytes, default 64 Aug 13 01:13:25.720893 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 01:13:25.720900 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 01:13:25.720914 kernel: clocksource: Switched to clocksource tsc Aug 13 01:13:25.720924 kernel: Initialise system trusted keyrings Aug 13 01:13:25.720934 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 01:13:25.720944 kernel: Key type asymmetric registered Aug 13 01:13:25.720953 kernel: Asymmetric key parser 'x509' registered Aug 13 01:13:25.720963 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 01:13:25.720971 kernel: io scheduler mq-deadline registered Aug 13 01:13:25.720978 kernel: io scheduler kyber registered Aug 13 01:13:25.720988 kernel: io scheduler bfq 
registered Aug 13 01:13:25.721063 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Aug 13 01:13:25.721119 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.721171 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Aug 13 01:13:25.721228 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.721312 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Aug 13 01:13:25.721384 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.721435 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Aug 13 01:13:25.721495 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.721569 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Aug 13 01:13:25.721621 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.721681 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Aug 13 01:13:25.721748 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.722158 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Aug 13 01:13:25.722219 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.722272 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Aug 13 01:13:25.722740 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.722841 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Aug 13 01:13:25.722904 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.723305 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Aug 13 01:13:25.723383 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.723439 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Aug 13 01:13:25.723622 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.723683 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Aug 13 01:13:25.724108 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724175 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Aug 13 01:13:25.724234 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724294 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Aug 13 01:13:25.724367 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724440 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Aug 13 01:13:25.724505 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724572 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Aug 13 01:13:25.724636 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724689 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Aug 13 01:13:25.724794 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724863 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Aug 13 01:13:25.724922 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.724977 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Aug 13 01:13:25.725043 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725118 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Aug 13 01:13:25.725188 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725248 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Aug 13 01:13:25.725302 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725358 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Aug 13 01:13:25.725432 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725497 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Aug 13 01:13:25.725554 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725625 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Aug 13 01:13:25.725678 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725733 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Aug 13 01:13:25.725803 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725872 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Aug 13 01:13:25.725941 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.725996 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Aug 13 01:13:25.726224 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.726555 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Aug 13 01:13:25.726628 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.726693 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Aug 13 01:13:25.727049 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.727628 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Aug 13 01:13:25.727700 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.727796 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Aug 13 01:13:25.727879 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.727947 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Aug 13 01:13:25.727998 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:13:25.728010 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:13:25.728021 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:13:25.728032 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:13:25.728043 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Aug 13 01:13:25.728052 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:13:25.728059 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:13:25.728129 kernel: rtc_cmos 00:01: registered as rtc0 Aug 13 01:13:25.728181 kernel: rtc_cmos 00:01: setting system clock to 2025-08-13T01:13:25 UTC (1755047605) Aug 13 01:13:25.728229 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Aug 13 01:13:25.728241 kernel: intel_pstate: CPU model not supported Aug 13 01:13:25.728247 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:13:25.728253 kernel: Segment Routing with IPv6 Aug 13 01:13:25.728263 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:13:25.728273 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:13:25.728284 kernel: Key type dns_resolver registered Aug 13 01:13:25.728298 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:13:25.728305 kernel: IPI shorthand broadcast: enabled Aug 13 01:13:25.728311 kernel: sched_clock: Marking stable (881415802, 228337564)->(1184912025, -75158659) Aug 13 01:13:25.728319 kernel: registered taskstats version 1 Aug 13 01:13:25.728330 kernel: Loading compiled-in X.509 certificates Aug 13 01:13:25.728340 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 01:13:25.728347 kernel: Key type .fscrypt registered Aug 13 01:13:25.728353 kernel: Key type fscrypt-provisioning 
registered Aug 13 01:13:25.728360 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:13:25.728371 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:13:25.728381 kernel: ima: No architecture policies found Aug 13 01:13:25.728391 kernel: clk: Disabling unused clocks Aug 13 01:13:25.728402 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 01:13:25.728413 kernel: Write protecting the kernel read-only data: 28672k Aug 13 01:13:25.728427 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 01:13:25.728433 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 01:13:25.728444 kernel: Run /init as init process Aug 13 01:13:25.728454 kernel: with arguments: Aug 13 01:13:25.728461 kernel: /init Aug 13 01:13:25.728467 kernel: with environment: Aug 13 01:13:25.728473 kernel: HOME=/ Aug 13 01:13:25.728481 kernel: TERM=linux Aug 13 01:13:25.728487 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:13:25.728495 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:13:25.728504 systemd[1]: Detected virtualization vmware. Aug 13 01:13:25.728511 systemd[1]: Detected architecture x86-64. Aug 13 01:13:25.728544 systemd[1]: Running in initrd. Aug 13 01:13:25.728570 systemd[1]: No hostname configured, using default hostname. Aug 13 01:13:25.728581 systemd[1]: Hostname set to . Aug 13 01:13:25.728595 systemd[1]: Initializing machine ID from random generator. Aug 13 01:13:25.728601 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:13:25.728610 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:13:25.728620 systemd[1]: Reached target cryptsetup.target. 
Aug 13 01:13:25.728633 systemd[1]: Reached target paths.target. Aug 13 01:13:25.728640 systemd[1]: Reached target slices.target. Aug 13 01:13:25.728650 systemd[1]: Reached target swap.target. Aug 13 01:13:25.728660 systemd[1]: Reached target timers.target. Aug 13 01:13:25.728674 systemd[1]: Listening on iscsid.socket. Aug 13 01:13:25.728684 systemd[1]: Listening on iscsiuio.socket. Aug 13 01:13:25.728691 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:13:25.728697 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:13:25.728703 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:13:25.728710 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:13:25.728716 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:13:25.728723 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:13:25.728731 systemd[1]: Reached target sockets.target. Aug 13 01:13:25.728737 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:13:25.728750 systemd[1]: Finished network-cleanup.service. Aug 13 01:13:25.728760 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:13:25.728769 systemd[1]: Starting systemd-journald.service... Aug 13 01:13:25.728776 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:13:25.728782 systemd[1]: Starting systemd-resolved.service... Aug 13 01:13:25.728789 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 01:13:25.728795 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:13:25.728803 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:13:25.728811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:13:25.728822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:13:25.728834 kernel: audit: type=1130 audit(1755047605.657:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:25.728844 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 01:13:25.728856 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 01:13:25.728867 kernel: audit: type=1130 audit(1755047605.662:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.728876 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 01:13:25.728884 systemd[1]: Starting dracut-cmdline.service... Aug 13 01:13:25.728890 kernel: audit: type=1130 audit(1755047605.678:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.728900 systemd[1]: Started systemd-resolved.service. Aug 13 01:13:25.728911 kernel: audit: type=1130 audit(1755047605.691:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.728923 systemd[1]: Reached target nss-lookup.target. Aug 13 01:13:25.728935 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:13:25.728942 kernel: Bridge firewalling registered Aug 13 01:13:25.728948 kernel: SCSI subsystem initialized Aug 13 01:13:25.728961 systemd-journald[216]: Journal started Aug 13 01:13:25.729005 systemd-journald[216]: Runtime Journal (/run/log/journal/fddbe065ba924289adf66f6b7e7ae77c) is 4.8M, max 38.8M, 34.0M free. Aug 13 01:13:25.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:25.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.653780 systemd-modules-load[217]: Inserted module 'overlay' Aug 13 01:13:25.730599 systemd[1]: Started systemd-journald.service. Aug 13 01:13:25.681506 systemd-resolved[218]: Positive Trust Anchors: Aug 13 01:13:25.681512 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:13:25.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.681532 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:13:25.685630 systemd-resolved[218]: Defaulting to hostname 'linux'. 
Aug 13 01:13:25.734865 kernel: audit: type=1130 audit(1755047605.729:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.709640 systemd-modules-load[217]: Inserted module 'br_netfilter' Aug 13 01:13:25.735395 dracut-cmdline[233]: dracut-dracut-053 Aug 13 01:13:25.735395 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Aug 13 01:13:25.735395 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:13:25.745507 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:13:25.745539 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:13:25.745548 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 01:13:25.749813 systemd-modules-load[217]: Inserted module 'dm_multipath' Aug 13 01:13:25.750220 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:13:25.750770 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:13:25.754063 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:13:25.754075 kernel: audit: type=1130 audit(1755047605.748:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 01:13:25.758207 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:13:25.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.761764 kernel: audit: type=1130 audit(1755047605.756:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.765756 kernel: iscsi: registered transport (tcp) Aug 13 01:13:25.782756 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:13:25.782776 kernel: QLogic iSCSI HBA Driver Aug 13 01:13:25.798611 systemd[1]: Finished dracut-cmdline.service. Aug 13 01:13:25.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:25.799212 systemd[1]: Starting dracut-pre-udev.service... Aug 13 01:13:25.802453 kernel: audit: type=1130 audit(1755047605.796:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:25.835756 kernel: raid6: avx2x4 gen() 47507 MB/s Aug 13 01:13:25.852763 kernel: raid6: avx2x4 xor() 18900 MB/s Aug 13 01:13:25.869764 kernel: raid6: avx2x2 gen() 41890 MB/s Aug 13 01:13:25.886762 kernel: raid6: avx2x2 xor() 26027 MB/s Aug 13 01:13:25.903790 kernel: raid6: avx2x1 gen() 44567 MB/s Aug 13 01:13:25.920794 kernel: raid6: avx2x1 xor() 27454 MB/s Aug 13 01:13:25.937784 kernel: raid6: sse2x4 gen() 21109 MB/s Aug 13 01:13:25.954785 kernel: raid6: sse2x4 xor() 11648 MB/s Aug 13 01:13:25.971754 kernel: raid6: sse2x2 gen() 21164 MB/s Aug 13 01:13:25.988757 kernel: raid6: sse2x2 xor() 13101 MB/s Aug 13 01:13:26.005762 kernel: raid6: sse2x1 gen() 18147 MB/s Aug 13 01:13:26.022976 kernel: raid6: sse2x1 xor() 8877 MB/s Aug 13 01:13:26.023021 kernel: raid6: using algorithm avx2x4 gen() 47507 MB/s Aug 13 01:13:26.023030 kernel: raid6: .... xor() 18900 MB/s, rmw enabled Aug 13 01:13:26.024175 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:13:26.032763 kernel: xor: automatically using best checksumming function avx Aug 13 01:13:26.093766 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 01:13:26.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:26.098363 systemd[1]: Finished dracut-pre-udev.service. Aug 13 01:13:26.099000 audit: BPF prog-id=7 op=LOAD Aug 13 01:13:26.099000 audit: BPF prog-id=8 op=LOAD Aug 13 01:13:26.101314 systemd[1]: Starting systemd-udevd.service... Aug 13 01:13:26.101761 kernel: audit: type=1130 audit(1755047606.096:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:26.109545 systemd-udevd[415]: Using default interface naming scheme 'v252'. 
Aug 13 01:13:26.112302 systemd[1]: Started systemd-udevd.service. Aug 13 01:13:26.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:26.116332 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 01:13:26.122523 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation Aug 13 01:13:26.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:26.138259 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 01:13:26.138861 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:13:26.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:26.205149 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:13:26.267047 kernel: VMware PVSCSI driver - version 1.0.7.0-k Aug 13 01:13:26.267079 kernel: vmw_pvscsi: using 64bit dma Aug 13 01:13:26.272756 kernel: libata version 3.00 loaded. 
Aug 13 01:13:26.274762 kernel: ata_piix 0000:00:07.1: version 2.13 Aug 13 01:13:26.286786 kernel: vmw_pvscsi: max_id: 16 Aug 13 01:13:26.286798 kernel: vmw_pvscsi: setting ring_pages to 8 Aug 13 01:13:26.286805 kernel: scsi host0: ata_piix Aug 13 01:13:26.286879 kernel: scsi host2: ata_piix Aug 13 01:13:26.286940 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Aug 13 01:13:26.286949 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Aug 13 01:13:26.289759 kernel: vmw_pvscsi: enabling reqCallThreshold Aug 13 01:13:26.289779 kernel: vmw_pvscsi: driver-based request coalescing enabled Aug 13 01:13:26.289788 kernel: vmw_pvscsi: using MSI-X Aug 13 01:13:26.295610 kernel: scsi host1: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Aug 13 01:13:26.295704 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Aug 13 01:13:26.295714 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #1 Aug 13 01:13:26.298790 kernel: scsi 1:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Aug 13 01:13:26.300806 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Aug 13 01:13:26.302605 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Aug 13 01:13:26.305757 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:13:26.450760 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Aug 13 01:13:26.456785 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Aug 13 01:13:26.463043 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Aug 13 01:13:26.464963 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 13 01:13:26.464985 kernel: AES CTR mode by8 optimization enabled Aug 13 01:13:26.478074 kernel: sd 1:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Aug 13 01:13:26.491258 kernel: sd 1:0:0:0: [sda] Write Protect is off Aug 13 01:13:26.491334 kernel: sd 1:0:0:0: [sda] Mode Sense: 31 00 00 00 Aug 13 01:13:26.491398 kernel: sd 1:0:0:0: [sda] Cache data unavailable Aug 13 01:13:26.491457 kernel: sd 1:0:0:0: [sda] Assuming drive cache: write through Aug 13 01:13:26.491515 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Aug 13 01:13:26.502041 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 01:13:26.502054 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:13:26.502067 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Aug 13 01:13:26.502155 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 01:13:26.532134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 01:13:26.532933 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462) Aug 13 01:13:26.537521 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 01:13:26.537669 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 01:13:26.540054 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 01:13:26.541305 systemd[1]: Starting disk-uuid.service... Aug 13 01:13:26.545842 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:13:26.597761 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:13:26.609763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:13:27.614825 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:13:27.614876 disk-uuid[548]: The operation has completed successfully. Aug 13 01:13:27.654399 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:13:27.654475 systemd[1]: Finished disk-uuid.service. 
Aug 13 01:13:27.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:27.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:27.657430 systemd[1]: Starting verity-setup.service... Aug 13 01:13:27.668764 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 01:13:27.745733 systemd[1]: Found device dev-mapper-usr.device. Aug 13 01:13:27.746656 systemd[1]: Mounting sysusr-usr.mount... Aug 13 01:13:27.749015 systemd[1]: Finished verity-setup.service. Aug 13 01:13:27.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:27.848936 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 01:13:27.847825 systemd[1]: Mounted sysusr-usr.mount. Aug 13 01:13:27.848391 systemd[1]: Starting afterburn-network-kargs.service... Aug 13 01:13:27.848835 systemd[1]: Starting ignition-setup.service... Aug 13 01:13:27.881438 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:13:27.881472 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:13:27.881484 kernel: BTRFS info (device sda6): has skinny extents Aug 13 01:13:27.886756 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:13:27.892283 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 01:13:27.897737 systemd[1]: Finished ignition-setup.service. Aug 13 01:13:27.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 01:13:27.898427 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 01:13:27.982488 systemd[1]: Finished afterburn-network-kargs.service. Aug 13 01:13:27.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:27.983165 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 01:13:28.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.029000 audit: BPF prog-id=9 op=LOAD Aug 13 01:13:28.030809 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 01:13:28.031684 systemd[1]: Starting systemd-networkd.service... Aug 13 01:13:28.046078 systemd-networkd[734]: lo: Link UP Aug 13 01:13:28.046086 systemd-networkd[734]: lo: Gained carrier Aug 13 01:13:28.046551 systemd-networkd[734]: Enumeration completed Aug 13 01:13:28.050774 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 01:13:28.050895 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 01:13:28.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.046711 systemd[1]: Started systemd-networkd.service. Aug 13 01:13:28.046860 systemd[1]: Reached target network.target. Aug 13 01:13:28.046913 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Aug 13 01:13:28.047374 systemd[1]: Starting iscsiuio.service... 
Aug 13 01:13:28.050940 systemd-networkd[734]: ens192: Link UP Aug 13 01:13:28.050942 systemd-networkd[734]: ens192: Gained carrier Aug 13 01:13:28.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.052121 systemd[1]: Started iscsiuio.service. Aug 13 01:13:28.052693 systemd[1]: Starting iscsid.service... Aug 13 01:13:28.055127 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:13:28.055127 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 01:13:28.055127 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 01:13:28.055127 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 01:13:28.055127 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:13:28.055127 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 01:13:28.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.056267 systemd[1]: Started iscsid.service. Aug 13 01:13:28.058843 systemd[1]: Starting dracut-initqueue.service... Aug 13 01:13:28.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 01:13:28.067204 systemd[1]: Finished dracut-initqueue.service. Aug 13 01:13:28.067367 systemd[1]: Reached target remote-fs-pre.target. Aug 13 01:13:28.067454 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:13:28.067540 systemd[1]: Reached target remote-fs.target. Aug 13 01:13:28.068189 systemd[1]: Starting dracut-pre-mount.service... Aug 13 01:13:28.074124 systemd[1]: Finished dracut-pre-mount.service. Aug 13 01:13:28.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.126280 ignition[605]: Ignition 2.14.0 Aug 13 01:13:28.126288 ignition[605]: Stage: fetch-offline Aug 13 01:13:28.126340 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:13:28.126357 ignition[605]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:13:28.138467 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:13:28.138562 ignition[605]: parsed url from cmdline: "" Aug 13 01:13:28.138564 ignition[605]: no config URL provided Aug 13 01:13:28.138567 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:13:28.138572 ignition[605]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:13:28.145367 ignition[605]: config successfully fetched Aug 13 01:13:28.145403 ignition[605]: parsing config with SHA512: bd196a1ba5f7196a359785700b013864b1a0ed05bbf9563131b4c460f1c971b9268f894c5c00b79816598fbbf2ec9b34afc6c634d1bcec342d35e4ef996d5e26 Aug 13 01:13:28.151153 unknown[605]: fetched base config from "system" Aug 13 01:13:28.151161 unknown[605]: fetched user config from "vmware" Aug 13 01:13:28.151636 ignition[605]: fetch-offline: fetch-offline passed Aug 13 01:13:28.151692 ignition[605]: Ignition finished 
successfully Aug 13 01:13:28.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.152339 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 01:13:28.152487 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 01:13:28.152951 systemd[1]: Starting ignition-kargs.service... Aug 13 01:13:28.158713 ignition[753]: Ignition 2.14.0 Aug 13 01:13:28.158721 ignition[753]: Stage: kargs Aug 13 01:13:28.158807 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:13:28.158818 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:13:28.160065 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:13:28.161640 ignition[753]: kargs: kargs passed Aug 13 01:13:28.161674 ignition[753]: Ignition finished successfully Aug 13 01:13:28.162834 systemd[1]: Finished ignition-kargs.service. Aug 13 01:13:28.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.163476 systemd[1]: Starting ignition-disks.service... 
Aug 13 01:13:28.168713 ignition[759]: Ignition 2.14.0 Aug 13 01:13:28.168721 ignition[759]: Stage: disks Aug 13 01:13:28.168810 ignition[759]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:13:28.168825 ignition[759]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:13:28.170249 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:13:28.171966 ignition[759]: disks: disks passed Aug 13 01:13:28.172004 ignition[759]: Ignition finished successfully Aug 13 01:13:28.172695 systemd[1]: Finished ignition-disks.service. Aug 13 01:13:28.172929 systemd[1]: Reached target initrd-root-device.target. Aug 13 01:13:28.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.173043 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:13:28.173216 systemd[1]: Reached target local-fs.target. Aug 13 01:13:28.173404 systemd[1]: Reached target sysinit.target. Aug 13 01:13:28.173564 systemd[1]: Reached target basic.target. Aug 13 01:13:28.174275 systemd[1]: Starting systemd-fsck-root.service... Aug 13 01:13:28.186444 systemd-fsck[767]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks Aug 13 01:13:28.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:28.188164 systemd[1]: Finished systemd-fsck-root.service. Aug 13 01:13:28.188846 systemd[1]: Mounting sysroot.mount... Aug 13 01:13:28.226814 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 01:13:28.227204 systemd[1]: Mounted sysroot.mount. 
Aug 13 01:13:28.227397 systemd[1]: Reached target initrd-root-fs.target.
Aug 13 01:13:28.229060 systemd[1]: Mounting sysroot-usr.mount...
Aug 13 01:13:28.229489 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 13 01:13:28.229522 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:13:28.229543 systemd[1]: Reached target ignition-diskful.target.
Aug 13 01:13:28.231707 systemd[1]: Mounted sysroot-usr.mount.
Aug 13 01:13:28.232302 systemd[1]: Starting initrd-setup-root.service...
Aug 13 01:13:28.235883 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:13:28.240277 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:13:28.242819 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:13:28.245224 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:13:28.273458 systemd[1]: Finished initrd-setup-root.service.
Aug 13 01:13:28.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:28.274063 systemd[1]: Starting ignition-mount.service...
Aug 13 01:13:28.274533 systemd[1]: Starting sysroot-boot.service...
Aug 13 01:13:28.277810 bash[818]: umount: /sysroot/usr/share/oem: not mounted.
Aug 13 01:13:28.282840 ignition[819]: INFO : Ignition 2.14.0
Aug 13 01:13:28.282840 ignition[819]: INFO : Stage: mount
Aug 13 01:13:28.283194 ignition[819]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:13:28.283194 ignition[819]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Aug 13 01:13:28.284172 ignition[819]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 01:13:28.286126 ignition[819]: INFO : mount: mount passed
Aug 13 01:13:28.286246 ignition[819]: INFO : Ignition finished successfully
Aug 13 01:13:28.286869 systemd[1]: Finished ignition-mount.service.
Aug 13 01:13:28.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:28.323827 systemd[1]: Finished sysroot-boot.service.
Aug 13 01:13:28.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:28.803190 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 01:13:28.819762 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (828)
Aug 13 01:13:28.822043 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:13:28.822058 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:13:28.822067 kernel: BTRFS info (device sda6): has skinny extents
Aug 13 01:13:28.826756 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:13:28.827982 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 01:13:28.828555 systemd[1]: Starting ignition-files.service...
Aug 13 01:13:28.839296 ignition[848]: INFO : Ignition 2.14.0
Aug 13 01:13:28.839543 ignition[848]: INFO : Stage: files
Aug 13 01:13:28.839718 ignition[848]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:13:28.839878 ignition[848]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Aug 13 01:13:28.841381 ignition[848]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 01:13:28.843707 ignition[848]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:13:28.844266 ignition[848]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:13:28.844420 ignition[848]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:13:28.847025 ignition[848]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:13:28.847317 ignition[848]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:13:28.848100 unknown[848]: wrote ssh authorized keys file for user: core
Aug 13 01:13:28.848325 ignition[848]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:13:28.849014 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:13:28.849233 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:13:28.915466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 01:13:29.088587 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:13:29.089140 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:13:29.089379 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 01:13:29.292243 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 01:13:29.358826 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:13:29.358826 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:13:29.359209 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:13:29.360931 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:13:29.361092 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:13:29.361092 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:13:29.361092 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:13:29.362729 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Aug 13 01:13:29.362729 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:13:29.366075 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3625521361"
Aug 13 01:13:29.366270 ignition[848]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3625521361": device or resource busy
Aug 13 01:13:29.366270 ignition[848]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3625521361", trying btrfs: device or resource busy
Aug 13 01:13:29.366270 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3625521361"
Aug 13 01:13:29.368602 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3625521361"
Aug 13 01:13:29.369724 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3625521361"
Aug 13 01:13:29.370773 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3625521361"
Aug 13 01:13:29.370773 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Aug 13 01:13:29.370773 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:13:29.370773 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:13:29.370517 systemd[1]: mnt-oem3625521361.mount: Deactivated successfully.
Aug 13 01:13:29.806689 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Aug 13 01:13:29.878819 systemd-networkd[734]: ens192: Gained IPv6LL
Aug 13 01:13:29.991960 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:13:29.992350 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Aug 13 01:13:29.992350 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Aug 13 01:13:29.992350 ignition[848]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Aug 13 01:13:29.992350 ignition[848]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Aug 13 01:13:29.992350 ignition[848]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Aug 13 01:13:29.992350 ignition[848]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 01:13:29.993419 ignition[848]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 01:13:30.126562 ignition[848]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 01:13:30.126844 ignition[848]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 01:13:30.126844 ignition[848]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:13:30.126844 ignition[848]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:13:30.126844 ignition[848]: INFO : files: files passed
Aug 13 01:13:30.127629 ignition[848]: INFO : Ignition finished successfully
Aug 13 01:13:30.127565 systemd[1]: Finished ignition-files.service.
Aug 13 01:13:30.130899 kernel: kauditd_printk_skb: 24 callbacks suppressed
Aug 13 01:13:30.130928 kernel: audit: type=1130 audit(1755047610.126:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.129268 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 13 01:13:30.132520 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 13 01:13:30.133355 systemd[1]: Starting ignition-quench.service...
Aug 13 01:13:30.143852 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:13:30.143925 systemd[1]: Finished ignition-quench.service.
Aug 13 01:13:30.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.144580 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:13:30.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.151906 kernel: audit: type=1130 audit(1755047610.142:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.151940 kernel: audit: type=1131 audit(1755047610.142:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.152144 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 13 01:13:30.156076 kernel: audit: type=1130 audit(1755047610.150:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.152481 systemd[1]: Reached target ignition-complete.target.
Aug 13 01:13:30.157077 systemd[1]: Starting initrd-parse-etc.service...
Aug 13 01:13:30.168040 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:13:30.168122 systemd[1]: Finished initrd-parse-etc.service.
Aug 13 01:13:30.168381 systemd[1]: Reached target initrd-fs.target.
Aug 13 01:13:30.173626 kernel: audit: type=1130 audit(1755047610.166:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.173653 kernel: audit: type=1131 audit(1755047610.166:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.173529 systemd[1]: Reached target initrd.target.
Aug 13 01:13:30.173711 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 01:13:30.174458 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 01:13:30.183015 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 01:13:30.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.183866 systemd[1]: Starting initrd-cleanup.service...
Aug 13 01:13:30.186775 kernel: audit: type=1130 audit(1755047610.181:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.191134 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:13:30.191334 systemd[1]: Finished initrd-cleanup.service.
Aug 13 01:13:30.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.191995 systemd[1]: Stopped target nss-lookup.target.
Aug 13 01:13:30.196622 kernel: audit: type=1130 audit(1755047610.189:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.196637 kernel: audit: type=1131 audit(1755047610.189:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.196691 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 01:13:30.196826 systemd[1]: Stopped target timers.target.
Aug 13 01:13:30.197003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:13:30.199712 kernel: audit: type=1131 audit(1755047610.195:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.197037 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 01:13:30.197191 systemd[1]: Stopped target initrd.target.
Aug 13 01:13:30.199783 systemd[1]: Stopped target basic.target.
Aug 13 01:13:30.199964 systemd[1]: Stopped target ignition-complete.target.
Aug 13 01:13:30.200126 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 01:13:30.200291 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 01:13:30.200471 systemd[1]: Stopped target remote-fs.target.
Aug 13 01:13:30.200641 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 01:13:30.200833 systemd[1]: Stopped target sysinit.target.
Aug 13 01:13:30.200984 systemd[1]: Stopped target local-fs.target.
Aug 13 01:13:30.201146 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 01:13:30.201307 systemd[1]: Stopped target swap.target.
Aug 13 01:13:30.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.201469 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:13:30.201507 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 01:13:30.201653 systemd[1]: Stopped target cryptsetup.target.
Aug 13 01:13:30.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.201795 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:13:30.201820 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 01:13:30.202006 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:13:30.202028 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 01:13:30.202147 systemd[1]: Stopped target paths.target.
Aug 13 01:13:30.202275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:13:30.205766 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 01:13:30.205881 systemd[1]: Stopped target slices.target.
Aug 13 01:13:30.206059 systemd[1]: Stopped target sockets.target.
Aug 13 01:13:30.206227 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:13:30.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.206251 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 01:13:30.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.206408 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:13:30.206431 systemd[1]: Stopped ignition-files.service.
Aug 13 01:13:30.207059 systemd[1]: Stopping ignition-mount.service...
Aug 13 01:13:30.207401 iscsid[739]: iscsid shutting down.
Aug 13 01:13:30.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.208914 systemd[1]: Stopping iscsid.service...
Aug 13 01:13:30.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.215018 ignition[888]: INFO : Ignition 2.14.0
Aug 13 01:13:30.215018 ignition[888]: INFO : Stage: umount
Aug 13 01:13:30.215018 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:13:30.215018 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Aug 13 01:13:30.215018 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Aug 13 01:13:30.208998 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:13:30.209025 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 01:13:30.209460 systemd[1]: Stopping sysroot-boot.service...
Aug 13 01:13:30.209587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:13:30.209628 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 01:13:30.209769 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:13:30.209791 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 01:13:30.210089 systemd[1]: iscsid.service: Deactivated successfully.
Aug 13 01:13:30.210155 systemd[1]: Stopped iscsid.service.
Aug 13 01:13:30.210303 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:13:30.210320 systemd[1]: Closed iscsid.socket.
Aug 13 01:13:30.210837 systemd[1]: Stopping iscsiuio.service...
Aug 13 01:13:30.212179 systemd[1]: iscsiuio.service: Deactivated successfully.
Aug 13 01:13:30.212270 systemd[1]: Stopped iscsiuio.service.
Aug 13 01:13:30.214980 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:13:30.214999 systemd[1]: Closed iscsiuio.socket.
Aug 13 01:13:30.218392 ignition[888]: INFO : umount: umount passed
Aug 13 01:13:30.218392 ignition[888]: INFO : Ignition finished successfully
Aug 13 01:13:30.219005 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:13:30.219065 systemd[1]: Stopped ignition-mount.service.
Aug 13 01:13:30.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.219268 systemd[1]: Stopped target network.target.
Aug 13 01:13:30.219370 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:13:30.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.219394 systemd[1]: Stopped ignition-disks.service.
Aug 13 01:13:30.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.219552 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:13:30.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.219572 systemd[1]: Stopped ignition-kargs.service.
Aug 13 01:13:30.219722 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:13:30.219752 systemd[1]: Stopped ignition-setup.service.
Aug 13 01:13:30.219944 systemd[1]: Stopping systemd-networkd.service...
Aug 13 01:13:30.220328 systemd[1]: Stopping systemd-resolved.service...
Aug 13 01:13:30.223291 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:13:30.223350 systemd[1]: Stopped systemd-networkd.service.
Aug 13 01:13:30.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.224019 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:13:30.224042 systemd[1]: Closed systemd-networkd.socket.
Aug 13 01:13:30.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.225150 systemd[1]: Stopping network-cleanup.service...
Aug 13 01:13:30.225249 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:13:30.225284 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 01:13:30.225420 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Aug 13 01:13:30.225443 systemd[1]: Stopped afterburn-network-kargs.service.
Aug 13 01:13:30.225551 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:13:30.225000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 01:13:30.225571 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 01:13:30.226867 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:13:30.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.226887 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 01:13:30.230002 systemd[1]: Stopping systemd-udevd.service...
Aug 13 01:13:30.230801 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:13:30.231062 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:13:30.231122 systemd[1]: Stopped systemd-resolved.service.
Aug 13 01:13:30.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.233376 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:13:30.233427 systemd[1]: Stopped network-cleanup.service.
Aug 13 01:13:30.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.232000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 01:13:30.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.235089 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:13:30.235336 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:13:30.235406 systemd[1]: Stopped systemd-udevd.service.
Aug 13 01:13:30.235728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:13:30.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.235771 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 01:13:30.235874 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:13:30.235890 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 01:13:30.235980 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:13:30.236001 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 01:13:30.236102 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:13:30.236121 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 01:13:30.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.236218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:13:30.236236 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 01:13:30.236700 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 01:13:30.236824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:13:30.236848 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 01:13:30.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:30.241173 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:13:30.241235 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 01:13:30.396895 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:13:30.396976 systemd[1]: Stopped sysroot-boot.service.
Aug 13 01:13:30.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:30.397602 systemd[1]: Reached target initrd-switch-root.target. Aug 13 01:13:30.397939 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:13:30.397975 systemd[1]: Stopped initrd-setup-root.service. Aug 13 01:13:30.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:30.399014 systemd[1]: Starting initrd-switch-root.service... Aug 13 01:13:30.407905 systemd[1]: Switching root. Aug 13 01:13:30.419525 systemd-journald[216]: Journal stopped Aug 13 01:13:33.143007 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Aug 13 01:13:33.143027 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 01:13:33.143035 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 01:13:33.143041 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 01:13:33.143047 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:13:33.143054 kernel: SELinux: policy capability open_perms=1 Aug 13 01:13:33.143060 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:13:33.143066 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:13:33.143072 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:13:33.143078 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:13:33.143083 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:13:33.143089 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:13:33.143096 systemd[1]: Successfully loaded SELinux policy in 58.234ms. 
Aug 13 01:13:33.143104 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.683ms. Aug 13 01:13:33.143111 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:13:33.143118 systemd[1]: Detected virtualization vmware. Aug 13 01:13:33.143125 systemd[1]: Detected architecture x86-64. Aug 13 01:13:33.143132 systemd[1]: Detected first boot. Aug 13 01:13:33.143139 systemd[1]: Initializing machine ID from random generator. Aug 13 01:13:33.143146 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 01:13:33.143152 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:13:33.143160 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:13:33.143167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:13:33.143174 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:13:33.143183 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:13:33.143189 systemd[1]: Stopped initrd-switch-root.service. Aug 13 01:13:33.143196 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:13:33.143203 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 01:13:33.143209 systemd[1]: Created slice system-addon\x2drun.slice. 
Aug 13 01:13:33.143216 systemd[1]: Created slice system-getty.slice. Aug 13 01:13:33.143222 systemd[1]: Created slice system-modprobe.slice. Aug 13 01:13:33.143230 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 01:13:33.143236 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 01:13:33.143243 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 01:13:33.143249 systemd[1]: Created slice user.slice. Aug 13 01:13:33.143255 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:13:33.143262 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 01:13:33.143268 systemd[1]: Set up automount boot.automount. Aug 13 01:13:33.143292 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 01:13:33.143300 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 01:13:33.143308 systemd[1]: Stopped target initrd-fs.target. Aug 13 01:13:33.143316 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 01:13:33.143323 systemd[1]: Reached target integritysetup.target. Aug 13 01:13:33.143345 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:13:33.143352 systemd[1]: Reached target remote-fs.target. Aug 13 01:13:33.143359 systemd[1]: Reached target slices.target. Aug 13 01:13:33.143365 systemd[1]: Reached target swap.target. Aug 13 01:13:33.143372 systemd[1]: Reached target torcx.target. Aug 13 01:13:33.143379 systemd[1]: Reached target veritysetup.target. Aug 13 01:13:33.143386 systemd[1]: Listening on systemd-coredump.socket. Aug 13 01:13:33.143393 systemd[1]: Listening on systemd-initctl.socket. Aug 13 01:13:33.143400 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:13:33.143406 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:13:33.143413 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:13:33.143422 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 01:13:33.143429 systemd[1]: Mounting dev-hugepages.mount... 
Aug 13 01:13:33.143435 systemd[1]: Mounting dev-mqueue.mount... Aug 13 01:13:33.143442 systemd[1]: Mounting media.mount... Aug 13 01:13:33.143449 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:13:33.143456 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 01:13:33.143463 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 01:13:33.143471 systemd[1]: Mounting tmp.mount... Aug 13 01:13:33.143478 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 01:13:33.143486 systemd[1]: Starting ignition-delete-config.service... Aug 13 01:13:33.143493 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:13:33.143500 systemd[1]: Starting modprobe@configfs.service... Aug 13 01:13:33.143507 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:13:33.143513 systemd[1]: Starting modprobe@drm.service... Aug 13 01:13:33.143520 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:13:33.143527 systemd[1]: Starting modprobe@fuse.service... Aug 13 01:13:33.143535 systemd[1]: Starting modprobe@loop.service... Aug 13 01:13:33.143542 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:13:33.143549 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:13:33.143556 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 01:13:33.143562 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:13:33.143569 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:13:33.143576 systemd[1]: Stopped systemd-journald.service. Aug 13 01:13:33.143583 systemd[1]: Starting systemd-journald.service... Aug 13 01:13:33.143590 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:13:33.143597 systemd[1]: Starting systemd-network-generator.service... Aug 13 01:13:33.143604 systemd[1]: Starting systemd-remount-fs.service... 
Aug 13 01:13:33.143612 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:13:33.143619 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:13:33.143625 systemd[1]: Stopped verity-setup.service. Aug 13 01:13:33.145664 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:13:33.145676 systemd[1]: Mounted dev-hugepages.mount. Aug 13 01:13:33.145684 systemd[1]: Mounted dev-mqueue.mount. Aug 13 01:13:33.145692 systemd[1]: Mounted media.mount. Aug 13 01:13:33.145701 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 01:13:33.145708 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 01:13:33.145732 systemd[1]: Mounted tmp.mount. Aug 13 01:13:33.145748 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:13:33.145757 kernel: fuse: init (API version 7.34) Aug 13 01:13:33.145765 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:13:33.145772 systemd[1]: Finished modprobe@configfs.service. Aug 13 01:13:33.145781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:13:33.145793 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:13:33.145807 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:13:33.145818 systemd[1]: Finished modprobe@drm.service. Aug 13 01:13:33.145828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:13:33.145837 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:13:33.145849 systemd-journald[1010]: Journal started Aug 13 01:13:33.145890 systemd-journald[1010]: Runtime Journal (/run/log/journal/022f79ff8e03414b888b343e354d359b) is 4.8M, max 38.8M, 34.0M free. 
Aug 13 01:13:30.637000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:13:30.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:13:30.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:13:30.739000 audit: BPF prog-id=10 op=LOAD Aug 13 01:13:30.739000 audit: BPF prog-id=10 op=UNLOAD Aug 13 01:13:30.739000 audit: BPF prog-id=11 op=LOAD Aug 13 01:13:30.739000 audit: BPF prog-id=11 op=UNLOAD Aug 13 01:13:30.935000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 01:13:30.935000 audit[922]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:13:30.935000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 01:13:30.937000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 01:13:30.937000 audit[922]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d999 a2=1ed a3=0 items=2 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:13:30.937000 audit: CWD cwd="/" Aug 13 01:13:30.937000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:30.937000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:30.937000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 01:13:33.034000 audit: BPF prog-id=12 op=LOAD Aug 13 01:13:33.034000 audit: BPF prog-id=3 op=UNLOAD Aug 13 01:13:33.149604 systemd[1]: Started systemd-journald.service. 
Aug 13 01:13:33.034000 audit: BPF prog-id=13 op=LOAD Aug 13 01:13:33.034000 audit: BPF prog-id=14 op=LOAD Aug 13 01:13:33.034000 audit: BPF prog-id=4 op=UNLOAD Aug 13 01:13:33.034000 audit: BPF prog-id=5 op=UNLOAD Aug 13 01:13:33.034000 audit: BPF prog-id=15 op=LOAD Aug 13 01:13:33.034000 audit: BPF prog-id=12 op=UNLOAD Aug 13 01:13:33.034000 audit: BPF prog-id=16 op=LOAD Aug 13 01:13:33.034000 audit: BPF prog-id=17 op=LOAD Aug 13 01:13:33.034000 audit: BPF prog-id=13 op=UNLOAD Aug 13 01:13:33.034000 audit: BPF prog-id=14 op=UNLOAD Aug 13 01:13:33.036000 audit: BPF prog-id=18 op=LOAD Aug 13 01:13:33.036000 audit: BPF prog-id=15 op=UNLOAD Aug 13 01:13:33.036000 audit: BPF prog-id=19 op=LOAD Aug 13 01:13:33.036000 audit: BPF prog-id=20 op=LOAD Aug 13 01:13:33.036000 audit: BPF prog-id=16 op=UNLOAD Aug 13 01:13:33.036000 audit: BPF prog-id=17 op=UNLOAD Aug 13 01:13:33.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.040000 audit: BPF prog-id=18 op=UNLOAD Aug 13 01:13:33.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:33.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.108000 audit: BPF prog-id=21 op=LOAD Aug 13 01:13:33.108000 audit: BPF prog-id=22 op=LOAD Aug 13 01:13:33.108000 audit: BPF prog-id=23 op=LOAD Aug 13 01:13:33.108000 audit: BPF prog-id=19 op=UNLOAD Aug 13 01:13:33.108000 audit: BPF prog-id=20 op=UNLOAD Aug 13 01:13:33.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:33.139000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 01:13:33.139000 audit[1010]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffed7aab000 a2=4000 a3=7ffed7aab09c items=0 ppid=1 pid=1010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:13:33.139000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 01:13:33.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:33.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.034842 systemd[1]: Queued start job for default target multi-user.target. 
Aug 13 01:13:30.912381 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:13:33.034852 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 01:13:30.920507 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 01:13:33.038365 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:13:30.920524 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 01:13:33.149956 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:13:30.920550 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 01:13:33.150038 systemd[1]: Finished modprobe@fuse.service. Aug 13 01:13:30.920558 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 01:13:33.150272 systemd[1]: Finished systemd-network-generator.service. Aug 13 01:13:30.920584 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 01:13:33.150496 systemd[1]: Finished systemd-remount-fs.service. 
Aug 13 01:13:30.920594 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 01:13:33.150730 systemd[1]: Reached target network-pre.target. Aug 13 01:13:30.920763 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 01:13:33.153411 jq[989]: true Aug 13 01:13:33.151836 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 01:13:30.920796 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 01:13:30.920806 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 01:13:30.934624 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 01:13:30.934653 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 01:13:33.153930 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 01:13:30.934669 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 01:13:33.154150 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Aug 13 01:13:30.934680 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 01:13:30.934694 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 01:13:30.934704 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 01:13:32.591376 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:32Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:13:32.591532 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:32Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:13:32.591591 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:32Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:13:32.591693 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:32Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:13:33.157965 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 01:13:33.161659 jq[1019]: true Aug 13 01:13:33.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.169316 systemd-journald[1010]: Time spent on flushing to /var/log/journal/022f79ff8e03414b888b343e354d359b is 61.567ms for 1991 entries. Aug 13 01:13:33.169316 systemd-journald[1010]: System Journal (/var/log/journal/022f79ff8e03414b888b343e354d359b) is 8.0M, max 584.8M, 576.8M free. Aug 13 01:13:33.263408 systemd-journald[1010]: Received client request to flush runtime journal. Aug 13 01:13:33.263458 kernel: loop: module loaded Aug 13 01:13:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:33.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:32.591727 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:32Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 01:13:33.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.158816 systemd[1]: Starting systemd-journal-flush.service... Aug 13 01:13:32.591782 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-08-13T01:13:32Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 01:13:33.158952 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:13:33.159682 systemd[1]: Starting systemd-random-seed.service... Aug 13 01:13:33.162380 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 01:13:33.163001 systemd[1]: Mounted sys-kernel-config.mount. 
Aug 13 01:13:33.168845 systemd[1]: Finished systemd-random-seed.service. Aug 13 01:13:33.169018 systemd[1]: Reached target first-boot-complete.target. Aug 13 01:13:33.170730 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:13:33.171737 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:13:33.177536 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:13:33.177621 systemd[1]: Finished modprobe@loop.service. Aug 13 01:13:33.177828 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:13:33.197656 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:13:33.200709 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 01:13:33.201718 systemd[1]: Starting systemd-sysusers.service... Aug 13 01:13:33.237789 systemd[1]: Finished systemd-sysusers.service. Aug 13 01:13:33.259474 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:13:33.260549 systemd[1]: Starting systemd-udev-settle.service... Aug 13 01:13:33.264136 systemd[1]: Finished systemd-journal-flush.service. Aug 13 01:13:33.269656 udevadm[1052]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 01:13:33.354477 ignition[1033]: Ignition 2.14.0 Aug 13 01:13:33.354974 ignition[1033]: deleting config from guestinfo properties Aug 13 01:13:33.357633 ignition[1033]: Successfully deleted config Aug 13 01:13:33.358245 systemd[1]: Finished ignition-delete-config.service. Aug 13 01:13:33.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.626000 systemd[1]: Finished systemd-hwdb-update.service. 
Aug 13 01:13:33.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.625000 audit: BPF prog-id=24 op=LOAD Aug 13 01:13:33.625000 audit: BPF prog-id=25 op=LOAD Aug 13 01:13:33.625000 audit: BPF prog-id=7 op=UNLOAD Aug 13 01:13:33.625000 audit: BPF prog-id=8 op=UNLOAD Aug 13 01:13:33.627167 systemd[1]: Starting systemd-udevd.service... Aug 13 01:13:33.638585 systemd-udevd[1054]: Using default interface naming scheme 'v252'. Aug 13 01:13:33.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.660000 audit: BPF prog-id=26 op=LOAD Aug 13 01:13:33.661609 systemd[1]: Started systemd-udevd.service. Aug 13 01:13:33.662931 systemd[1]: Starting systemd-networkd.service... Aug 13 01:13:33.668000 audit: BPF prog-id=27 op=LOAD Aug 13 01:13:33.668000 audit: BPF prog-id=28 op=LOAD Aug 13 01:13:33.668000 audit: BPF prog-id=29 op=LOAD Aug 13 01:13:33.670639 systemd[1]: Starting systemd-userdbd.service... Aug 13 01:13:33.689860 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Aug 13 01:13:33.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.701336 systemd[1]: Started systemd-userdbd.service. 
Aug 13 01:13:33.749759 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 01:13:33.753758 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:13:33.780853 systemd-networkd[1063]: lo: Link UP Aug 13 01:13:33.780859 systemd-networkd[1063]: lo: Gained carrier Aug 13 01:13:33.781366 systemd-networkd[1063]: Enumeration completed Aug 13 01:13:33.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.781427 systemd[1]: Started systemd-networkd.service. Aug 13 01:13:33.781439 systemd-networkd[1063]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Aug 13 01:13:33.783801 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 01:13:33.783918 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 01:13:33.785503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Aug 13 01:13:33.785870 systemd-networkd[1063]: ens192: Link UP Aug 13 01:13:33.785963 systemd-networkd[1063]: ens192: Gained carrier Aug 13 01:13:33.815760 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Aug 13 01:13:33.825704 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Aug 13 01:13:33.825821 kernel: Guest personality initialized and is active Aug 13 01:13:33.811000 audit[1069]: AVC avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 01:13:33.811000 audit[1069]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555bce248fe0 a1=338ac a2=7fba50189bc5 a3=5 items=110 ppid=1054 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:13:33.811000 audit: CWD cwd="/" Aug 13 01:13:33.811000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=1 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=2 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=3 name=(null) inode=25142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=4 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=5 name=(null) inode=25143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=6 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=7 name=(null) inode=25144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=8 name=(null) inode=25144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=9 name=(null) inode=25145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=10 name=(null) inode=25144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=11 name=(null) inode=25146 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=12 name=(null) inode=25144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=13 name=(null) inode=25147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=14 name=(null) inode=25144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=15 name=(null) inode=25148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=16 name=(null) inode=25144 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=17 name=(null) inode=25149 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
01:13:33.811000 audit: PATH item=18 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=19 name=(null) inode=25150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=20 name=(null) inode=25150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=21 name=(null) inode=25151 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=22 name=(null) inode=25150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=23 name=(null) inode=25152 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=24 name=(null) inode=25150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=25 name=(null) inode=25153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=26 name=(null) inode=25150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=27 
name=(null) inode=25154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=28 name=(null) inode=25150 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=29 name=(null) inode=25155 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=30 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=31 name=(null) inode=25156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=32 name=(null) inode=25156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=33 name=(null) inode=25157 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=34 name=(null) inode=25156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=35 name=(null) inode=25158 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=36 name=(null) inode=25156 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=37 name=(null) inode=25159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=38 name=(null) inode=25156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=39 name=(null) inode=25160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=40 name=(null) inode=25156 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=41 name=(null) inode=25161 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=42 name=(null) inode=25141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=43 name=(null) inode=25162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=44 name=(null) inode=25162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=45 name=(null) inode=25163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=46 name=(null) inode=25162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=47 name=(null) inode=25164 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=48 name=(null) inode=25162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.827170 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:13:33.827187 kernel: Initialized host personality Aug 13 01:13:33.811000 audit: PATH item=49 name=(null) inode=25165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=50 name=(null) inode=25162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=51 name=(null) inode=25166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=52 name=(null) inode=25162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=53 name=(null) inode=25167 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 
13 01:13:33.811000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=55 name=(null) inode=25168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=56 name=(null) inode=25168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=57 name=(null) inode=25169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=58 name=(null) inode=25168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=59 name=(null) inode=25170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=60 name=(null) inode=25168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=61 name=(null) inode=25171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=62 name=(null) inode=25171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=63 
name=(null) inode=25172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=64 name=(null) inode=25171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=65 name=(null) inode=25173 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=66 name=(null) inode=25171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=67 name=(null) inode=25174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=68 name=(null) inode=25171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=69 name=(null) inode=25175 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=70 name=(null) inode=25171 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=71 name=(null) inode=25176 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=72 name=(null) inode=25168 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=73 name=(null) inode=25177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=74 name=(null) inode=25177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=75 name=(null) inode=25178 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=76 name=(null) inode=25177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=77 name=(null) inode=25179 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=78 name=(null) inode=25177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=79 name=(null) inode=25180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=80 name=(null) inode=25177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=81 name=(null) inode=25181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=82 name=(null) inode=25177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=83 name=(null) inode=25182 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=84 name=(null) inode=25168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=85 name=(null) inode=25183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=86 name=(null) inode=25183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=87 name=(null) inode=25184 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=88 name=(null) inode=25183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=89 name=(null) inode=25185 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=90 name=(null) inode=25183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=91 name=(null) inode=25186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=92 name=(null) inode=25183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=93 name=(null) inode=25187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=94 name=(null) inode=25183 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=95 name=(null) inode=25188 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=96 name=(null) inode=25168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=97 name=(null) inode=25189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=98 name=(null) inode=25189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=99 name=(null) inode=25190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=100 name=(null) inode=25189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=101 name=(null) inode=25191 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=102 name=(null) inode=25189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=103 name=(null) inode=25192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=104 name=(null) inode=25189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=105 name=(null) inode=25193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=106 name=(null) inode=25189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=107 name=(null) inode=25194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
01:13:33.811000 audit: PATH item=109 name=(null) inode=25195 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:13:33.811000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 01:13:33.817154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:13:33.830764 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Aug 13 01:13:33.842757 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 01:13:33.860762 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:13:33.863713 (udev-worker)[1070]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Aug 13 01:13:33.874986 systemd[1]: Finished systemd-udev-settle.service. Aug 13 01:13:33.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.876049 systemd[1]: Starting lvm2-activation-early.service... Aug 13 01:13:33.893652 lvm[1087]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:13:33.918350 systemd[1]: Finished lvm2-activation-early.service. Aug 13 01:13:33.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.918548 systemd[1]: Reached target cryptsetup.target. Aug 13 01:13:33.919565 systemd[1]: Starting lvm2-activation.service... Aug 13 01:13:33.922072 lvm[1088]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 13 01:13:33.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:33.940305 systemd[1]: Finished lvm2-activation.service. Aug 13 01:13:33.940480 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:13:33.940578 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:13:33.940592 systemd[1]: Reached target local-fs.target. Aug 13 01:13:33.940683 systemd[1]: Reached target machines.target. Aug 13 01:13:33.941698 systemd[1]: Starting ldconfig.service... Aug 13 01:13:33.942256 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:13:33.942289 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:13:33.943043 systemd[1]: Starting systemd-boot-update.service... Aug 13 01:13:33.943732 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 01:13:33.944554 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 01:13:33.945475 systemd[1]: Starting systemd-sysext.service... Aug 13 01:13:33.951740 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1090 (bootctl) Aug 13 01:13:33.952496 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 01:13:33.966146 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 01:13:33.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:13:33.977499 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 01:13:33.980170 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 01:13:33.980269 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 01:13:33.999761 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 01:13:34.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:34.560448 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:13:34.560812 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 01:13:34.582759 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:13:34.585242 systemd-fsck[1101]: fsck.fat 4.2 (2021-01-31) Aug 13 01:13:34.585242 systemd-fsck[1101]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 01:13:34.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:34.586283 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 01:13:34.587325 systemd[1]: Mounting boot.mount... Aug 13 01:13:34.599957 systemd[1]: Mounted boot.mount. Aug 13 01:13:34.610768 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 01:13:34.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:13:34.611566 systemd[1]: Finished systemd-boot-update.service. Aug 13 01:13:34.629386 (sd-sysext)[1105]: Using extensions 'kubernetes'. 
Aug 13 01:13:34.630486 (sd-sysext)[1105]: Merged extensions into '/usr'.
Aug 13 01:13:34.640128 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.641223 systemd[1]: Mounting usr-share-oem.mount...
Aug 13 01:13:34.642343 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:13:34.644354 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:13:34.645131 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:13:34.645312 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.645413 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:13:34.645538 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.648042 systemd[1]: Mounted usr-share-oem.mount.
Aug 13 01:13:34.648339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:13:34.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.648441 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:13:34.648772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:13:34.648847 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:13:34.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.649138 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:13:34.649210 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:13:34.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.649533 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:13:34.649596 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.650240 systemd[1]: Finished systemd-sysext.service.
Aug 13 01:13:34.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.651173 systemd[1]: Starting ensure-sysext.service...
Aug 13 01:13:34.651983 systemd[1]: Starting systemd-tmpfiles-setup.service...
Aug 13 01:13:34.655991 systemd[1]: Reloading.
Aug 13 01:13:34.672728 systemd-tmpfiles[1112]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Aug 13 01:13:34.682453 systemd-tmpfiles[1112]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:13:34.695254 systemd-tmpfiles[1112]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:13:34.698363 /usr/lib/systemd/system-generators/torcx-generator[1131]: time="2025-08-13T01:13:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 01:13:34.698381 /usr/lib/systemd/system-generators/torcx-generator[1131]: time="2025-08-13T01:13:34Z" level=info msg="torcx already run"
Aug 13 01:13:34.765668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 01:13:34.765678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 01:13:34.777586 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:13:34.811000 audit: BPF prog-id=30 op=LOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=26 op=UNLOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=31 op=LOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=27 op=UNLOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=32 op=LOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=33 op=LOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=28 op=UNLOAD
Aug 13 01:13:34.811000 audit: BPF prog-id=29 op=UNLOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=34 op=LOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=21 op=UNLOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=35 op=LOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=36 op=LOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=22 op=UNLOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=23 op=UNLOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=37 op=LOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=38 op=LOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=24 op=UNLOAD
Aug 13 01:13:34.812000 audit: BPF prog-id=25 op=UNLOAD
Aug 13 01:13:34.821805 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.822616 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:13:34.823400 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:13:34.825073 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:13:34.826086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.826159 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:13:34.826226 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.826690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:13:34.826860 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:13:34.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.827191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:13:34.827261 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:13:34.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.827595 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:13:34.827661 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:13:34.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.828004 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:13:34.828062 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.828951 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.829757 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:13:34.831483 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:13:34.833091 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:13:34.833792 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.833864 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:13:34.833929 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.834391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:13:34.834473 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:13:34.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.834780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:13:34.834848 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:13:34.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.835566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:13:34.835630 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:13:34.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.836055 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:13:34.836118 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.837496 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.838608 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:13:34.840449 systemd[1]: Starting modprobe@drm.service...
Aug 13 01:13:34.841417 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:13:34.842241 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:13:34.842389 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.842457 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:13:34.843312 systemd[1]: Starting systemd-networkd-wait-online.service...
Aug 13 01:13:34.843466 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:13:34.844047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:13:34.844138 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:13:34.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.845517 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:13:34.845596 systemd[1]: Finished modprobe@drm.service.
Aug 13 01:13:34.845927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:13:34.845993 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:13:34.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.846285 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:13:34.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.847045 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:13:34.850050 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:13:34.850122 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:13:34.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.850269 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:13:34.898283 systemd[1]: Finished systemd-tmpfiles-setup.service.
Aug 13 01:13:34.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.899469 systemd[1]: Starting audit-rules.service...
Aug 13 01:13:34.901274 systemd[1]: Starting clean-ca-certificates.service...
Aug 13 01:13:34.902397 systemd[1]: Starting systemd-journal-catalog-update.service...
Aug 13 01:13:34.902000 audit: BPF prog-id=39 op=LOAD
Aug 13 01:13:34.904000 audit: BPF prog-id=40 op=LOAD
Aug 13 01:13:34.905446 systemd[1]: Starting systemd-resolved.service...
Aug 13 01:13:34.907263 systemd[1]: Starting systemd-timesyncd.service...
Aug 13 01:13:34.908854 systemd[1]: Starting systemd-update-utmp.service...
Aug 13 01:13:34.914077 systemd[1]: Finished clean-ca-certificates.service.
Aug 13 01:13:34.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.914269 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:13:34.914000 audit[1211]: SYSTEM_BOOT pid=1211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.917334 systemd[1]: Finished systemd-update-utmp.service.
Aug 13 01:13:34.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.941738 ldconfig[1089]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:13:34.944576 systemd[1]: Finished ldconfig.service.
Aug 13 01:13:34.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.945189 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 01:13:34.946204 systemd[1]: Starting systemd-update-done.service...
Aug 13 01:13:34.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.950322 systemd[1]: Finished systemd-update-done.service.
Aug 13 01:13:34.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:13:34.959000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 01:13:34.959000 audit[1226]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcafd8c7b0 a2=420 a3=0 items=0 ppid=1205 pid=1226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:13:34.959000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 01:13:34.962204 augenrules[1226]: No rules
Aug 13 01:13:34.961982 systemd[1]: Finished audit-rules.service.
Aug 13 01:13:34.968479 systemd[1]: Started systemd-timesyncd.service.
Aug 13 01:13:34.968640 systemd[1]: Reached target time-set.target.
Aug 13 01:13:34.970509 systemd-resolved[1209]: Positive Trust Anchors:
Aug 13 01:13:34.970645 systemd-resolved[1209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:13:34.970707 systemd-resolved[1209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 01:15:04.308480 systemd-timesyncd[1210]: Contacted time server 104.233.211.205:123 (0.flatcar.pool.ntp.org).
Aug 13 01:15:04.308510 systemd-timesyncd[1210]: Initial clock synchronization to Wed 2025-08-13 01:15:04.308428 UTC.
Aug 13 01:15:04.319888 systemd-resolved[1209]: Defaulting to hostname 'linux'.
Aug 13 01:15:04.321003 systemd[1]: Started systemd-resolved.service.
Aug 13 01:15:04.321150 systemd[1]: Reached target network.target.
Aug 13 01:15:04.321255 systemd[1]: Reached target nss-lookup.target.
Aug 13 01:15:04.321348 systemd[1]: Reached target sysinit.target.
Aug 13 01:15:04.321482 systemd[1]: Started motdgen.path.
Aug 13 01:15:04.321610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 01:15:04.321800 systemd[1]: Started logrotate.timer.
Aug 13 01:15:04.321929 systemd[1]: Started mdadm.timer.
Aug 13 01:15:04.322011 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 01:15:04.322122 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:15:04.322141 systemd[1]: Reached target paths.target.
Aug 13 01:15:04.322225 systemd[1]: Reached target timers.target.
Aug 13 01:15:04.322480 systemd[1]: Listening on dbus.socket.
Aug 13 01:15:04.323372 systemd[1]: Starting docker.socket...
Aug 13 01:15:04.325792 systemd[1]: Listening on sshd.socket.
Aug 13 01:15:04.325952 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:15:04.326189 systemd[1]: Listening on docker.socket.
Aug 13 01:15:04.326314 systemd[1]: Reached target sockets.target.
Aug 13 01:15:04.326403 systemd[1]: Reached target basic.target.
Aug 13 01:15:04.326509 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:15:04.326528 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:15:04.327290 systemd[1]: Starting containerd.service...
Aug 13 01:15:04.328761 systemd[1]: Starting dbus.service...
Aug 13 01:15:04.329834 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 01:15:04.331779 jq[1236]: false
Aug 13 01:15:04.330797 systemd[1]: Starting extend-filesystems.service...
Aug 13 01:15:04.331060 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 01:15:04.332448 systemd[1]: Starting motdgen.service...
Aug 13 01:15:04.335563 systemd[1]: Starting prepare-helm.service...
Aug 13 01:15:04.336437 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 01:15:04.337266 systemd[1]: Starting sshd-keygen.service...
Aug 13 01:15:04.339226 systemd[1]: Starting systemd-logind.service...
Aug 13 01:15:04.339439 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:15:04.339483 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:15:04.340190 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:15:04.340618 systemd[1]: Starting update-engine.service...
Aug 13 01:15:04.342501 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 01:15:04.343566 systemd[1]: Starting vmtoolsd.service...
Aug 13 01:15:04.344625 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:15:04.348758 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 01:15:04.350208 systemd[1]: Started vmtoolsd.service.
Aug 13 01:15:04.353008 jq[1248]: true
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found loop1
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda1
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda2
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda3
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found usr
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda4
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda6
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda7
Aug 13 01:15:04.355346 extend-filesystems[1237]: Found sda9
Aug 13 01:15:04.355346 extend-filesystems[1237]: Checking size of /dev/sda9
Aug 13 01:15:04.364247 tar[1252]: linux-amd64/helm
Aug 13 01:15:04.366686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:15:04.366790 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 01:15:04.367531 extend-filesystems[1237]: Old size kept for /dev/sda9
Aug 13 01:15:04.368301 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:15:04.368394 systemd[1]: Finished motdgen.service.
Aug 13 01:15:04.368465 extend-filesystems[1237]: Found sr0
Aug 13 01:15:04.370364 jq[1258]: true
Aug 13 01:15:04.370940 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:15:04.371043 systemd[1]: Finished extend-filesystems.service.
Aug 13 01:15:04.387340 dbus-daemon[1235]: [system] SELinux support is enabled
Aug 13 01:15:04.387450 systemd[1]: Started dbus.service.
Aug 13 01:15:04.388782 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:15:04.388800 systemd[1]: Reached target system-config.target.
Aug 13 01:15:04.388912 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:15:04.388922 systemd[1]: Reached target user-config.target.
Aug 13 01:15:04.410753 bash[1288]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:15:04.411354 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 01:15:04.414997 env[1257]: time="2025-08-13T01:15:04.414971931Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 01:15:04.434684 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 01:15:04.444132 update_engine[1247]: I0813 01:15:04.443403 1247 main.cc:92] Flatcar Update Engine starting
Aug 13 01:15:04.445557 systemd[1]: Started update-engine.service.
Aug 13 01:15:04.445700 update_engine[1247]: I0813 01:15:04.445581 1247 update_check_scheduler.cc:74] Next update check in 6m42s
Aug 13 01:15:04.446886 systemd[1]: Started locksmithd.service.
Aug 13 01:15:04.461732 systemd-logind[1246]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 01:15:04.461747 systemd-logind[1246]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:15:04.463177 systemd-logind[1246]: New seat seat0.
Aug 13 01:15:04.464959 systemd[1]: Started systemd-logind.service.
Aug 13 01:15:04.479957 env[1257]: time="2025-08-13T01:15:04.479930137Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 01:15:04.480129 env[1257]: time="2025-08-13T01:15:04.480118035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:15:04.481183 env[1257]: time="2025-08-13T01:15:04.481147078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:15:04.485482 env[1257]: time="2025-08-13T01:15:04.485471465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:15:04.485655 env[1257]: time="2025-08-13T01:15:04.485642977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:15:04.485731 env[1257]: time="2025-08-13T01:15:04.485720616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 01:15:04.485777 env[1257]: time="2025-08-13T01:15:04.485766406Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 01:15:04.485830 env[1257]: time="2025-08-13T01:15:04.485817611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 01:15:04.485916 env[1257]: time="2025-08-13T01:15:04.485907221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:15:04.486080 env[1257]: time="2025-08-13T01:15:04.486070842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:15:04.486217 env[1257]: time="2025-08-13T01:15:04.486204564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:15:04.486261 env[1257]: time="2025-08-13T01:15:04.486251825Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 01:15:04.486331 env[1257]: time="2025-08-13T01:15:04.486320865Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 01:15:04.486373 env[1257]: time="2025-08-13T01:15:04.486363227Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:15:04.488401 env[1257]: time="2025-08-13T01:15:04.488389633Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 01:15:04.488465 env[1257]: time="2025-08-13T01:15:04.488455111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 01:15:04.488514 env[1257]: time="2025-08-13T01:15:04.488504067Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 01:15:04.488570 env[1257]: time="2025-08-13T01:15:04.488560838Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488617 env[1257]: time="2025-08-13T01:15:04.488607401Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488671 env[1257]: time="2025-08-13T01:15:04.488654564Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488722 env[1257]: time="2025-08-13T01:15:04.488712959Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488768 env[1257]: time="2025-08-13T01:15:04.488758447Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488813 env[1257]: time="2025-08-13T01:15:04.488803615Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488859 env[1257]: time="2025-08-13T01:15:04.488849585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488916 env[1257]: time="2025-08-13T01:15:04.488906126Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.488961 env[1257]: time="2025-08-13T01:15:04.488951832Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 01:15:04.489063 env[1257]: time="2025-08-13T01:15:04.489054394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 01:15:04.489150 env[1257]: time="2025-08-13T01:15:04.489141943Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 01:15:04.489356 env[1257]: time="2025-08-13T01:15:04.489341070Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 01:15:04.489386 env[1257]: time="2025-08-13T01:15:04.489364272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489386 env[1257]: time="2025-08-13T01:15:04.489373115Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 01:15:04.489419 env[1257]: time="2025-08-13T01:15:04.489409513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489436 env[1257]: time="2025-08-13T01:15:04.489418317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489436 env[1257]: time="2025-08-13T01:15:04.489425450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489436 env[1257]: time="2025-08-13T01:15:04.489432272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489489 env[1257]: time="2025-08-13T01:15:04.489446344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489489 env[1257]: time="2025-08-13T01:15:04.489453909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489489 env[1257]: time="2025-08-13T01:15:04.489460345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489489 env[1257]: time="2025-08-13T01:15:04.489466366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489489 env[1257]: time="2025-08-13T01:15:04.489474361Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 01:15:04.489570 env[1257]: time="2025-08-13T01:15:04.489548690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489570 env[1257]: time="2025-08-13T01:15:04.489557670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489570 env[1257]: time="2025-08-13T01:15:04.489566627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 01:15:04.489617 env[1257]: time="2025-08-13T01:15:04.489573142Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 01:15:04.489617 env[1257]: time="2025-08-13T01:15:04.489581318Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 01:15:04.489617 env[1257]: time="2025-08-13T01:15:04.489587725Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 01:15:04.489617 env[1257]: time="2025-08-13T01:15:04.489597727Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 01:15:04.489696 env[1257]: time="2025-08-13T01:15:04.489619087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Aug 13 01:15:04.489782 env[1257]: time="2025-08-13T01:15:04.489746926Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 01:15:04.489782 env[1257]: time="2025-08-13T01:15:04.489781287Z" level=info msg="Connect containerd service" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.489802824Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.490129055Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.490266398Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.490290288Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.490876571Z" level=info msg="containerd successfully booted in 0.076522s" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.491538226Z" level=info msg="Start subscribing containerd event" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.491578635Z" level=info msg="Start recovering state" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.491616988Z" level=info msg="Start event monitor" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.491626480Z" level=info msg="Start snapshots syncer" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.491687486Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:15:04.492043 env[1257]: time="2025-08-13T01:15:04.491698351Z" level=info msg="Start streaming server" Aug 13 01:15:04.490362 systemd[1]: Started containerd.service. 
Aug 13 01:15:04.777288 tar[1252]: linux-amd64/LICENSE Aug 13 01:15:04.777415 tar[1252]: linux-amd64/README.md Aug 13 01:15:04.780431 systemd[1]: Finished prepare-helm.service. Aug 13 01:15:04.813735 locksmithd[1296]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:15:05.096871 systemd-networkd[1063]: ens192: Gained IPv6LL Aug 13 01:15:05.098134 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 01:15:05.098436 systemd[1]: Reached target network-online.target. Aug 13 01:15:05.099774 systemd[1]: Starting kubelet.service... Aug 13 01:15:05.410644 sshd_keygen[1270]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:15:05.426155 systemd[1]: Finished sshd-keygen.service. Aug 13 01:15:05.427425 systemd[1]: Starting issuegen.service... Aug 13 01:15:05.431286 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:15:05.431400 systemd[1]: Finished issuegen.service. Aug 13 01:15:05.432726 systemd[1]: Starting systemd-user-sessions.service... Aug 13 01:15:05.437828 systemd[1]: Finished systemd-user-sessions.service. Aug 13 01:15:05.438930 systemd[1]: Started getty@tty1.service. Aug 13 01:15:05.439864 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 01:15:05.440074 systemd[1]: Reached target getty.target. Aug 13 01:15:06.745249 systemd[1]: Started kubelet.service. Aug 13 01:15:06.745564 systemd[1]: Reached target multi-user.target. Aug 13 01:15:06.746537 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 01:15:06.751283 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 01:15:06.751375 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 01:15:06.751541 systemd[1]: Startup finished in 919ms (kernel) + 4.971s (initrd) + 6.904s (userspace) = 12.794s. 
Aug 13 01:15:06.777528 login[1364]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 01:15:06.777626 login[1363]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 01:15:06.791222 systemd[1]: Created slice user-500.slice. Aug 13 01:15:06.792146 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 01:15:06.795117 systemd-logind[1246]: New session 1 of user core. Aug 13 01:15:06.798416 systemd-logind[1246]: New session 2 of user core. Aug 13 01:15:06.801734 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 01:15:06.802848 systemd[1]: Starting user@500.service... Aug 13 01:15:06.806703 (systemd)[1370]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:15:06.870951 systemd[1370]: Queued start job for default target default.target. Aug 13 01:15:06.871593 systemd[1370]: Reached target paths.target. Aug 13 01:15:06.871610 systemd[1370]: Reached target sockets.target. Aug 13 01:15:06.871619 systemd[1370]: Reached target timers.target. Aug 13 01:15:06.871627 systemd[1370]: Reached target basic.target. Aug 13 01:15:06.871681 systemd[1370]: Reached target default.target. Aug 13 01:15:06.871702 systemd[1370]: Startup finished in 56ms. Aug 13 01:15:06.871709 systemd[1]: Started user@500.service. Aug 13 01:15:06.872693 systemd[1]: Started session-1.scope. Aug 13 01:15:06.873268 systemd[1]: Started session-2.scope. 
Aug 13 01:15:07.404136 kubelet[1367]: E0813 01:15:07.404103 1367 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:15:07.405250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:15:07.405332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:15:17.412634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:15:17.412774 systemd[1]: Stopped kubelet.service. Aug 13 01:15:17.413758 systemd[1]: Starting kubelet.service... Aug 13 01:15:17.640753 systemd[1]: Started kubelet.service. Aug 13 01:15:17.723074 kubelet[1399]: E0813 01:15:17.723018 1399 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:15:17.725223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:15:17.725297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:15:27.912830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:15:27.912984 systemd[1]: Stopped kubelet.service. Aug 13 01:15:27.914219 systemd[1]: Starting kubelet.service... Aug 13 01:15:28.169758 systemd[1]: Started kubelet.service. 
Aug 13 01:15:28.211118 kubelet[1408]: E0813 01:15:28.211081 1408 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:15:28.212198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:15:28.212280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:15:34.503375 systemd[1]: Created slice system-sshd.slice. Aug 13 01:15:34.504406 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.68.195:44210.service. Aug 13 01:15:34.546551 sshd[1415]: Accepted publickey for core from 139.178.68.195 port 44210 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:15:34.547487 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:15:34.551412 systemd[1]: Started session-3.scope. Aug 13 01:15:34.551801 systemd-logind[1246]: New session 3 of user core. Aug 13 01:15:34.601375 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.68.195:44212.service. Aug 13 01:15:34.630749 sshd[1420]: Accepted publickey for core from 139.178.68.195 port 44212 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:15:34.631473 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:15:34.634335 systemd[1]: Started session-4.scope. Aug 13 01:15:34.634859 systemd-logind[1246]: New session 4 of user core. Aug 13 01:15:34.686207 sshd[1420]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:34.688477 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.68.195:44214.service. Aug 13 01:15:34.691463 systemd[1]: sshd@1-139.178.70.100:22-139.178.68.195:44212.service: Deactivated successfully. 
Aug 13 01:15:34.691940 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:15:34.692791 systemd-logind[1246]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:15:34.693338 systemd-logind[1246]: Removed session 4. Aug 13 01:15:34.720121 sshd[1425]: Accepted publickey for core from 139.178.68.195 port 44214 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:15:34.720978 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:15:34.724355 systemd[1]: Started session-5.scope. Aug 13 01:15:34.725158 systemd-logind[1246]: New session 5 of user core. Aug 13 01:15:34.772987 sshd[1425]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:34.775530 systemd[1]: sshd@2-139.178.70.100:22-139.178.68.195:44214.service: Deactivated successfully. Aug 13 01:15:34.775979 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:15:34.776403 systemd-logind[1246]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:15:34.777272 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.68.195:44216.service. Aug 13 01:15:34.777909 systemd-logind[1246]: Removed session 5. Aug 13 01:15:34.808328 sshd[1432]: Accepted publickey for core from 139.178.68.195 port 44216 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:15:34.809184 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:15:34.812887 systemd[1]: Started session-6.scope. Aug 13 01:15:34.813159 systemd-logind[1246]: New session 6 of user core. Aug 13 01:15:34.864056 sshd[1432]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:34.867228 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.68.195:44230.service. Aug 13 01:15:34.867618 systemd[1]: sshd@3-139.178.70.100:22-139.178.68.195:44216.service: Deactivated successfully. Aug 13 01:15:34.868154 systemd[1]: session-6.scope: Deactivated successfully. 
Aug 13 01:15:34.868657 systemd-logind[1246]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:15:34.869656 systemd-logind[1246]: Removed session 6. Aug 13 01:15:34.899746 sshd[1437]: Accepted publickey for core from 139.178.68.195 port 44230 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:15:34.900933 sshd[1437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:15:34.904130 systemd-logind[1246]: New session 7 of user core. Aug 13 01:15:34.904628 systemd[1]: Started session-7.scope. Aug 13 01:15:34.964983 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:15:34.965162 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:15:34.981577 systemd[1]: Starting docker.service... Aug 13 01:15:35.007295 env[1451]: time="2025-08-13T01:15:35.007270245Z" level=info msg="Starting up" Aug 13 01:15:35.008237 env[1451]: time="2025-08-13T01:15:35.008196725Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:15:35.008322 env[1451]: time="2025-08-13T01:15:35.008211091Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:15:35.008344 env[1451]: time="2025-08-13T01:15:35.008328717Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:15:35.008344 env[1451]: time="2025-08-13T01:15:35.008335847Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:15:35.009333 env[1451]: time="2025-08-13T01:15:35.009318410Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:15:35.009333 env[1451]: time="2025-08-13T01:15:35.009330089Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:15:35.009388 env[1451]: time="2025-08-13T01:15:35.009338138Z" level=info msg="ccResolverWrapper: sending update to cc: 
{[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:15:35.009388 env[1451]: time="2025-08-13T01:15:35.009343275Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:15:35.021885 env[1451]: time="2025-08-13T01:15:35.021867650Z" level=info msg="Loading containers: start." Aug 13 01:15:35.103688 kernel: Initializing XFRM netlink socket Aug 13 01:15:35.128644 env[1451]: time="2025-08-13T01:15:35.128616431Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 01:15:35.167574 systemd-networkd[1063]: docker0: Link UP Aug 13 01:15:35.177344 env[1451]: time="2025-08-13T01:15:35.177324506Z" level=info msg="Loading containers: done." Aug 13 01:15:35.183304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1549905431-merged.mount: Deactivated successfully. Aug 13 01:15:35.185961 env[1451]: time="2025-08-13T01:15:35.185944303Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:15:35.186128 env[1451]: time="2025-08-13T01:15:35.186117550Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 01:15:35.186214 env[1451]: time="2025-08-13T01:15:35.186205604Z" level=info msg="Daemon has completed initialization" Aug 13 01:15:35.195385 env[1451]: time="2025-08-13T01:15:35.195359117Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:15:35.195564 systemd[1]: Started docker.service. Aug 13 01:15:35.942708 env[1257]: time="2025-08-13T01:15:35.942679923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:15:36.696681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687693602.mount: Deactivated successfully. 
Aug 13 01:15:37.794327 env[1257]: time="2025-08-13T01:15:37.794296050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:37.794914 env[1257]: time="2025-08-13T01:15:37.794897845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:37.795927 env[1257]: time="2025-08-13T01:15:37.795913157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:37.796865 env[1257]: time="2025-08-13T01:15:37.796852266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:37.797309 env[1257]: time="2025-08-13T01:15:37.797291719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 01:15:37.797679 env[1257]: time="2025-08-13T01:15:37.797652105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 01:15:38.412595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:15:38.412731 systemd[1]: Stopped kubelet.service. Aug 13 01:15:38.413710 systemd[1]: Starting kubelet.service... Aug 13 01:15:38.479398 systemd[1]: Started kubelet.service. 
Aug 13 01:15:38.521386 kubelet[1577]: E0813 01:15:38.521352 1577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:15:38.522547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:15:38.522622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:15:39.631361 env[1257]: time="2025-08-13T01:15:39.631326931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:39.632147 env[1257]: time="2025-08-13T01:15:39.632129144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:39.633119 env[1257]: time="2025-08-13T01:15:39.633104470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:39.634120 env[1257]: time="2025-08-13T01:15:39.634107847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:39.634605 env[1257]: time="2025-08-13T01:15:39.634587877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 01:15:39.634875 env[1257]: 
time="2025-08-13T01:15:39.634863522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 01:15:40.868346 env[1257]: time="2025-08-13T01:15:40.868319665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:40.869137 env[1257]: time="2025-08-13T01:15:40.869118866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:40.870142 env[1257]: time="2025-08-13T01:15:40.870127791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:40.871105 env[1257]: time="2025-08-13T01:15:40.871089762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:40.871612 env[1257]: time="2025-08-13T01:15:40.871597354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:15:40.872013 env[1257]: time="2025-08-13T01:15:40.871995592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:15:41.820172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2909720266.mount: Deactivated successfully. 
Aug 13 01:15:42.257686 env[1257]: time="2025-08-13T01:15:42.257637507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:42.258321 env[1257]: time="2025-08-13T01:15:42.258300507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:42.258967 env[1257]: time="2025-08-13T01:15:42.258953634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:42.259725 env[1257]: time="2025-08-13T01:15:42.259712826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:42.260139 env[1257]: time="2025-08-13T01:15:42.260118749Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:15:42.260486 env[1257]: time="2025-08-13T01:15:42.260474355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:15:42.857562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179646962.mount: Deactivated successfully. 
Aug 13 01:15:43.673616 env[1257]: time="2025-08-13T01:15:43.673573689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:43.683264 env[1257]: time="2025-08-13T01:15:43.683235705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:43.693072 env[1257]: time="2025-08-13T01:15:43.693045131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:43.705820 env[1257]: time="2025-08-13T01:15:43.705717686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:43.706327 env[1257]: time="2025-08-13T01:15:43.706309228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:15:43.706763 env[1257]: time="2025-08-13T01:15:43.706745698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:15:44.267904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265700187.mount: Deactivated successfully. 
Aug 13 01:15:44.270200 env[1257]: time="2025-08-13T01:15:44.270173401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:44.271186 env[1257]: time="2025-08-13T01:15:44.271170805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:44.271992 env[1257]: time="2025-08-13T01:15:44.271977964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:44.272717 env[1257]: time="2025-08-13T01:15:44.272701295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:44.272985 env[1257]: time="2025-08-13T01:15:44.272967736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:15:44.273656 env[1257]: time="2025-08-13T01:15:44.273644107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:15:44.770121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155001647.mount: Deactivated successfully. 
Aug 13 01:15:46.664452 env[1257]: time="2025-08-13T01:15:46.664416965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:46.684752 env[1257]: time="2025-08-13T01:15:46.684722472Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:46.692534 env[1257]: time="2025-08-13T01:15:46.692510102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:46.700875 env[1257]: time="2025-08-13T01:15:46.700852302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:15:46.701269 env[1257]: time="2025-08-13T01:15:46.701252078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:15:48.662605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 01:15:48.662764 systemd[1]: Stopped kubelet.service. Aug 13 01:15:48.663813 systemd[1]: Starting kubelet.service... Aug 13 01:15:49.400299 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:15:49.400361 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:15:49.400511 systemd[1]: Stopped kubelet.service. Aug 13 01:15:49.402144 systemd[1]: Starting kubelet.service... Aug 13 01:15:49.422854 systemd[1]: Reloading. 
Aug 13 01:15:49.477850 /usr/lib/systemd/system-generators/torcx-generator[1628]: time="2025-08-13T01:15:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:15:49.477867 /usr/lib/systemd/system-generators/torcx-generator[1628]: time="2025-08-13T01:15:49Z" level=info msg="torcx already run" Aug 13 01:15:49.529839 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:15:49.529958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:15:49.541937 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:15:49.607282 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:15:49.607329 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:15:49.607528 systemd[1]: Stopped kubelet.service. Aug 13 01:15:49.609058 systemd[1]: Starting kubelet.service... Aug 13 01:15:49.972016 update_engine[1247]: I0813 01:15:49.971788 1247 update_attempter.cc:509] Updating boot flags... Aug 13 01:15:50.392233 systemd[1]: Started kubelet.service. Aug 13 01:15:50.634925 kubelet[1710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:15:50.635143 kubelet[1710]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 01:15:50.635187 kubelet[1710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:15:50.635577 kubelet[1710]: I0813 01:15:50.635556 1710 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:15:50.928736 kubelet[1710]: I0813 01:15:50.928715 1710 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 01:15:50.928892 kubelet[1710]: I0813 01:15:50.928883 1710 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:15:50.929119 kubelet[1710]: I0813 01:15:50.929110 1710 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 01:15:50.957061 kubelet[1710]: E0813 01:15:50.957034 1710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:15:50.957850 kubelet[1710]: I0813 01:15:50.957835 1710 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:15:50.964105 kubelet[1710]: E0813 01:15:50.964082 1710 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 01:15:50.964105 kubelet[1710]: I0813 01:15:50.964100 1710 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 01:15:50.966897 kubelet[1710]: I0813 01:15:50.966879 1710 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:15:50.967499 kubelet[1710]: I0813 01:15:50.967483 1710 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 01:15:50.967594 kubelet[1710]: I0813 01:15:50.967574 1710 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:15:50.967710 kubelet[1710]: I0813 01:15:50.967592 1710 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:15:50.967783 kubelet[1710]: I0813 01:15:50.967715 1710 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:15:50.967783 kubelet[1710]: I0813 01:15:50.967722 1710 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 01:15:50.967823 kubelet[1710]: I0813 01:15:50.967792 1710 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:15:50.970031 kubelet[1710]: I0813 01:15:50.970013 1710 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 01:15:50.970031 kubelet[1710]: I0813 01:15:50.970033 1710 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:15:50.970101 kubelet[1710]: I0813 01:15:50.970064 1710 kubelet.go:314] "Adding apiserver pod source"
Aug 13 01:15:50.970101 kubelet[1710]: I0813 01:15:50.970074 1710 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:15:50.981918 kubelet[1710]: I0813 01:15:50.981899 1710 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 01:15:50.982054 kubelet[1710]: W0813 01:15:50.982026 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Aug 13 01:15:50.982115 kubelet[1710]: E0813 01:15:50.982065 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:15:50.982159 kubelet[1710]: W0813 01:15:50.982108 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Aug 13 01:15:50.982159 kubelet[1710]: E0813 01:15:50.982126 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:15:50.982480 kubelet[1710]: I0813 01:15:50.982471 1710 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 01:15:50.983044 kubelet[1710]: W0813 01:15:50.983034 1710 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 01:15:50.984460 kubelet[1710]: I0813 01:15:50.984451 1710 server.go:1274] "Started kubelet"
Aug 13 01:15:50.986372 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Aug 13 01:15:50.986475 kubelet[1710]: I0813 01:15:50.986467 1710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:15:50.991620 kubelet[1710]: I0813 01:15:50.991600 1710 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:15:50.992205 kubelet[1710]: I0813 01:15:50.992192 1710 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 01:15:50.997458 kubelet[1710]: I0813 01:15:50.997440 1710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:15:50.997619 kubelet[1710]: I0813 01:15:50.997611 1710 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:15:50.997802 kubelet[1710]: I0813 01:15:50.997793 1710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:15:50.999394 kubelet[1710]: I0813 01:15:50.999386 1710 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 01:15:50.999553 kubelet[1710]: E0813 01:15:50.999543 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:15:51.001153 kubelet[1710]: I0813 01:15:51.001144 1710 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 01:15:51.001233 kubelet[1710]: I0813 01:15:51.001226 1710 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:15:51.014258 kubelet[1710]: E0813 01:15:51.014236 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms"
Aug 13 01:15:51.017798 kubelet[1710]: E0813 01:15:51.014414 1710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2ea1814678cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 01:15:50.984431821 +0000 UTC m=+0.590060954,LastTimestamp:2025-08-13 01:15:50.984431821 +0000 UTC m=+0.590060954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 01:15:51.020360 kubelet[1710]: W0813 01:15:51.020337 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Aug 13 01:15:51.020452 kubelet[1710]: E0813 01:15:51.020441 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:15:51.024783 kubelet[1710]: I0813 01:15:51.024701 1710 factory.go:221] Registration of the systemd container factory successfully
Aug 13 01:15:51.024783 kubelet[1710]: I0813 01:15:51.024779 1710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:15:51.025820 kubelet[1710]: I0813 01:15:51.025762 1710 factory.go:221] Registration of the containerd container factory successfully
Aug 13 01:15:51.026066 kubelet[1710]: E0813 01:15:51.026057 1710 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 01:15:51.034002 kubelet[1710]: I0813 01:15:51.033927 1710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:15:51.034556 kubelet[1710]: I0813 01:15:51.034544 1710 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 01:15:51.034556 kubelet[1710]: I0813 01:15:51.034552 1710 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 01:15:51.034628 kubelet[1710]: I0813 01:15:51.034561 1710 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:15:51.034893 kubelet[1710]: I0813 01:15:51.034885 1710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 01:15:51.034947 kubelet[1710]: I0813 01:15:51.034938 1710 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 01:15:51.035015 kubelet[1710]: I0813 01:15:51.035007 1710 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 01:15:51.035099 kubelet[1710]: E0813 01:15:51.035085 1710 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 01:15:51.035843 kubelet[1710]: W0813 01:15:51.035822 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Aug 13 01:15:51.037540 kubelet[1710]: E0813 01:15:51.035883 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:15:51.037834 kubelet[1710]: I0813 01:15:51.037822 1710 policy_none.go:49] "None policy: Start"
Aug 13 01:15:51.038235 kubelet[1710]: I0813 01:15:51.038224 1710 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 01:15:51.038276 kubelet[1710]: I0813 01:15:51.038239 1710 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 01:15:51.056455 systemd[1]: Created slice kubepods.slice.
Aug 13 01:15:51.059140 systemd[1]: Created slice kubepods-burstable.slice.
Aug 13 01:15:51.062116 systemd[1]: Created slice kubepods-besteffort.slice.
Aug 13 01:15:51.076251 kubelet[1710]: I0813 01:15:51.076236 1710 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 01:15:51.076742 kubelet[1710]: I0813 01:15:51.076732 1710 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 01:15:51.076837 kubelet[1710]: I0813 01:15:51.076811 1710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 01:15:51.077267 kubelet[1710]: I0813 01:15:51.077260 1710 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 01:15:51.078174 kubelet[1710]: E0813 01:15:51.078155 1710 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 13 01:15:51.141790 systemd[1]: Created slice kubepods-burstable-pod5f0f3f312279b56beb51bcbf6f7a032d.slice.
Aug 13 01:15:51.155481 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice.
Aug 13 01:15:51.164020 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice.
Aug 13 01:15:51.178471 kubelet[1710]: I0813 01:15:51.178448 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 01:15:51.178722 kubelet[1710]: E0813 01:15:51.178705 1710 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost"
Aug 13 01:15:51.215412 kubelet[1710]: E0813 01:15:51.215322 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms"
Aug 13 01:15:51.302702 kubelet[1710]: I0813 01:15:51.302661 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f0f3f312279b56beb51bcbf6f7a032d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5f0f3f312279b56beb51bcbf6f7a032d\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 01:15:51.302844 kubelet[1710]: I0813 01:15:51.302833 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f0f3f312279b56beb51bcbf6f7a032d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5f0f3f312279b56beb51bcbf6f7a032d\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 01:15:51.302910 kubelet[1710]: I0813 01:15:51.302901 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f0f3f312279b56beb51bcbf6f7a032d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5f0f3f312279b56beb51bcbf6f7a032d\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 01:15:51.302977 kubelet[1710]: I0813 01:15:51.302960 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:15:51.303026 kubelet[1710]: I0813 01:15:51.303018 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 01:15:51.303091 kubelet[1710]: I0813 01:15:51.303083 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:15:51.303151 kubelet[1710]: I0813 01:15:51.303134 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:15:51.303213 kubelet[1710]: I0813 01:15:51.303199 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:15:51.303274 kubelet[1710]: I0813 01:15:51.303264 1710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:15:51.380354 kubelet[1710]: I0813 01:15:51.380322 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 01:15:51.380707 kubelet[1710]: E0813 01:15:51.380689 1710 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost"
Aug 13 01:15:51.454531 env[1257]: time="2025-08-13T01:15:51.454444960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5f0f3f312279b56beb51bcbf6f7a032d,Namespace:kube-system,Attempt:0,}"
Aug 13 01:15:51.458457 env[1257]: time="2025-08-13T01:15:51.458433465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}"
Aug 13 01:15:51.466577 env[1257]: time="2025-08-13T01:15:51.466459597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}"
Aug 13 01:15:51.616286 kubelet[1710]: E0813 01:15:51.616260 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms"
Aug 13 01:15:51.781675 kubelet[1710]: I0813 01:15:51.781611 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 01:15:51.782110 kubelet[1710]: E0813 01:15:51.782096 1710 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost"
Aug 13 01:15:52.082922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875980607.mount: Deactivated successfully.
Aug 13 01:15:52.085081 env[1257]: time="2025-08-13T01:15:52.085057786Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.086458 env[1257]: time="2025-08-13T01:15:52.086437339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.088805 env[1257]: time="2025-08-13T01:15:52.088779647Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.089347 env[1257]: time="2025-08-13T01:15:52.089327632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.089827 env[1257]: time="2025-08-13T01:15:52.089808303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.090283 env[1257]: time="2025-08-13T01:15:52.090266545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.090809 env[1257]: time="2025-08-13T01:15:52.090793032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.092954 env[1257]: time="2025-08-13T01:15:52.092934796Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.096262 env[1257]: time="2025-08-13T01:15:52.096239479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.098914 env[1257]: time="2025-08-13T01:15:52.098892927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.100994 env[1257]: time="2025-08-13T01:15:52.100974120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.105488 env[1257]: time="2025-08-13T01:15:52.105468638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:15:52.173282 env[1257]: time="2025-08-13T01:15:52.164262990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:15:52.173282 env[1257]: time="2025-08-13T01:15:52.164300724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:15:52.173282 env[1257]: time="2025-08-13T01:15:52.164308853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:15:52.173282 env[1257]: time="2025-08-13T01:15:52.166189766Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3964cce4ff54640eac07a2522213e8aa912978eacb4b79a4d884a6008a61318e pid=1749 runtime=io.containerd.runc.v2
Aug 13 01:15:52.173511 env[1257]: time="2025-08-13T01:15:52.171164453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:15:52.173511 env[1257]: time="2025-08-13T01:15:52.171187699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:15:52.173511 env[1257]: time="2025-08-13T01:15:52.171194365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:15:52.173511 env[1257]: time="2025-08-13T01:15:52.171339940Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97125cfc38eee3ce98b8985fd2f4cc25ff7b59a968a2c993f4d492d2b9cc8315 pid=1766 runtime=io.containerd.runc.v2
Aug 13 01:15:52.189064 env[1257]: time="2025-08-13T01:15:52.188436046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:15:52.189064 env[1257]: time="2025-08-13T01:15:52.188481826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:15:52.189064 env[1257]: time="2025-08-13T01:15:52.188502360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:15:52.189064 env[1257]: time="2025-08-13T01:15:52.188586096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d57d75a1c0e5356a17f2ce68c4724806f432fb2f60da03147a68a55946ae04d pid=1801 runtime=io.containerd.runc.v2
Aug 13 01:15:52.189733 systemd[1]: Started cri-containerd-3964cce4ff54640eac07a2522213e8aa912978eacb4b79a4d884a6008a61318e.scope.
Aug 13 01:15:52.200259 systemd[1]: Started cri-containerd-97125cfc38eee3ce98b8985fd2f4cc25ff7b59a968a2c993f4d492d2b9cc8315.scope.
Aug 13 01:15:52.219526 systemd[1]: Started cri-containerd-2d57d75a1c0e5356a17f2ce68c4724806f432fb2f60da03147a68a55946ae04d.scope.
Aug 13 01:15:52.243750 env[1257]: time="2025-08-13T01:15:52.243716953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"97125cfc38eee3ce98b8985fd2f4cc25ff7b59a968a2c993f4d492d2b9cc8315\""
Aug 13 01:15:52.247841 env[1257]: time="2025-08-13T01:15:52.247814641Z" level=info msg="CreateContainer within sandbox \"97125cfc38eee3ce98b8985fd2f4cc25ff7b59a968a2c993f4d492d2b9cc8315\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 01:15:52.249952 kubelet[1710]: W0813 01:15:52.249901 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Aug 13 01:15:52.249952 kubelet[1710]: E0813 01:15:52.249936 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:15:52.255739 env[1257]: time="2025-08-13T01:15:52.255718077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5f0f3f312279b56beb51bcbf6f7a032d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3964cce4ff54640eac07a2522213e8aa912978eacb4b79a4d884a6008a61318e\""
Aug 13 01:15:52.258910 env[1257]: time="2025-08-13T01:15:52.258884543Z" level=info msg="CreateContainer within sandbox \"3964cce4ff54640eac07a2522213e8aa912978eacb4b79a4d884a6008a61318e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 01:15:52.269380 env[1257]: time="2025-08-13T01:15:52.269349147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d57d75a1c0e5356a17f2ce68c4724806f432fb2f60da03147a68a55946ae04d\""
Aug 13 01:15:52.270592 env[1257]: time="2025-08-13T01:15:52.270570480Z" level=info msg="CreateContainer within sandbox \"2d57d75a1c0e5356a17f2ce68c4724806f432fb2f60da03147a68a55946ae04d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 01:15:52.288468 env[1257]: time="2025-08-13T01:15:52.288420200Z" level=info msg="CreateContainer within sandbox \"97125cfc38eee3ce98b8985fd2f4cc25ff7b59a968a2c993f4d492d2b9cc8315\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"015daaea80c6c6defe75e774b4be8d1577766163b68a5dc4f3e6a553198f8d40\""
Aug 13 01:15:52.289676 env[1257]: time="2025-08-13T01:15:52.289649015Z" level=info msg="StartContainer for \"015daaea80c6c6defe75e774b4be8d1577766163b68a5dc4f3e6a553198f8d40\""
Aug 13 01:15:52.291790 env[1257]: time="2025-08-13T01:15:52.291761980Z" level=info msg="CreateContainer within sandbox \"2d57d75a1c0e5356a17f2ce68c4724806f432fb2f60da03147a68a55946ae04d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5afee928141d7c91caae7d8341984d072825495f01fb70d75e30f3dc6add0a5b\""
Aug 13 01:15:52.292427 env[1257]: time="2025-08-13T01:15:52.292406817Z" level=info msg="CreateContainer within sandbox \"3964cce4ff54640eac07a2522213e8aa912978eacb4b79a4d884a6008a61318e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56d85a05d81a81871e0d3bc18a875d84fe888b4eaed212e3dcb71325830859b5\""
Aug 13 01:15:52.292821 env[1257]: time="2025-08-13T01:15:52.292802144Z" level=info msg="StartContainer for \"5afee928141d7c91caae7d8341984d072825495f01fb70d75e30f3dc6add0a5b\""
Aug 13 01:15:52.293927 env[1257]: time="2025-08-13T01:15:52.293908193Z" level=info msg="StartContainer for \"56d85a05d81a81871e0d3bc18a875d84fe888b4eaed212e3dcb71325830859b5\""
Aug 13 01:15:52.305391 systemd[1]: Started cri-containerd-56d85a05d81a81871e0d3bc18a875d84fe888b4eaed212e3dcb71325830859b5.scope.
Aug 13 01:15:52.312475 systemd[1]: Started cri-containerd-015daaea80c6c6defe75e774b4be8d1577766163b68a5dc4f3e6a553198f8d40.scope.
Aug 13 01:15:52.324997 systemd[1]: Started cri-containerd-5afee928141d7c91caae7d8341984d072825495f01fb70d75e30f3dc6add0a5b.scope.
Aug 13 01:15:52.344005 env[1257]: time="2025-08-13T01:15:52.343944186Z" level=info msg="StartContainer for \"56d85a05d81a81871e0d3bc18a875d84fe888b4eaed212e3dcb71325830859b5\" returns successfully" Aug 13 01:15:52.368476 env[1257]: time="2025-08-13T01:15:52.368443363Z" level=info msg="StartContainer for \"015daaea80c6c6defe75e774b4be8d1577766163b68a5dc4f3e6a553198f8d40\" returns successfully" Aug 13 01:15:52.375682 env[1257]: time="2025-08-13T01:15:52.375639275Z" level=info msg="StartContainer for \"5afee928141d7c91caae7d8341984d072825495f01fb70d75e30f3dc6add0a5b\" returns successfully" Aug 13 01:15:52.416893 kubelet[1710]: E0813 01:15:52.416852 1710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" Aug 13 01:15:52.431971 kubelet[1710]: W0813 01:15:52.430366 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:15:52.431971 kubelet[1710]: E0813 01:15:52.430416 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:15:52.519386 kubelet[1710]: W0813 01:15:52.519346 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:15:52.519531 kubelet[1710]: E0813 
01:15:52.519514 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:15:52.540011 kubelet[1710]: W0813 01:15:52.539969 1710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:15:52.540096 kubelet[1710]: E0813 01:15:52.540017 1710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:15:52.582926 kubelet[1710]: I0813 01:15:52.582908 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:15:52.583255 kubelet[1710]: E0813 01:15:52.583232 1710 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Aug 13 01:15:53.089932 kubelet[1710]: E0813 01:15:53.089908 1710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:15:54.184777 kubelet[1710]: I0813 
01:15:54.184761 1710 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:15:54.481544 kubelet[1710]: E0813 01:15:54.481473 1710 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 01:15:54.657510 kubelet[1710]: I0813 01:15:54.657486 1710 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 01:15:54.657648 kubelet[1710]: E0813 01:15:54.657636 1710 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 01:15:54.674319 kubelet[1710]: E0813 01:15:54.674301 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:54.775030 kubelet[1710]: E0813 01:15:54.774949 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:54.875470 kubelet[1710]: E0813 01:15:54.875452 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:54.975916 kubelet[1710]: E0813 01:15:54.975885 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:55.076433 kubelet[1710]: E0813 01:15:55.076367 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:55.177405 kubelet[1710]: E0813 01:15:55.177381 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:55.277943 kubelet[1710]: E0813 01:15:55.277919 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:15:55.378500 kubelet[1710]: E0813 01:15:55.378417 1710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Aug 13 01:15:55.983505 kubelet[1710]: I0813 01:15:55.983473 1710 apiserver.go:52] "Watching apiserver" Aug 13 01:15:56.002259 kubelet[1710]: I0813 01:15:56.002242 1710 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:15:56.709496 systemd[1]: Reloading. Aug 13 01:15:56.759864 /usr/lib/systemd/system-generators/torcx-generator[1995]: time="2025-08-13T01:15:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:15:56.759881 /usr/lib/systemd/system-generators/torcx-generator[1995]: time="2025-08-13T01:15:56Z" level=info msg="torcx already run" Aug 13 01:15:56.831677 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:15:56.831690 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:15:56.847767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:15:56.920002 kubelet[1710]: I0813 01:15:56.919898 1710 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:15:56.920111 systemd[1]: Stopping kubelet.service... Aug 13 01:15:56.939924 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:15:56.940042 systemd[1]: Stopped kubelet.service. Aug 13 01:15:56.941607 systemd[1]: Starting kubelet.service... Aug 13 01:15:58.995073 systemd[1]: Started kubelet.service. 
Aug 13 01:15:59.104532 kubelet[2059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:15:59.104532 kubelet[2059]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:15:59.104532 kubelet[2059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:15:59.104811 kubelet[2059]: I0813 01:15:59.104562 2059 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:15:59.108750 kubelet[2059]: I0813 01:15:59.108731 2059 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:15:59.108750 kubelet[2059]: I0813 01:15:59.108745 2059 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:15:59.108898 kubelet[2059]: I0813 01:15:59.108876 2059 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:15:59.117620 kubelet[2059]: I0813 01:15:59.117295 2059 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:15:59.130811 kubelet[2059]: I0813 01:15:59.130788 2059 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:15:59.167241 kubelet[2059]: E0813 01:15:59.167221 2059 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:15:59.167354 kubelet[2059]: I0813 01:15:59.167346 2059 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:15:59.178395 kubelet[2059]: I0813 01:15:59.178370 2059 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:15:59.184565 kubelet[2059]: I0813 01:15:59.184142 2059 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:15:59.184565 kubelet[2059]: I0813 01:15:59.184266 2059 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:15:59.184565 kubelet[2059]: I0813 01:15:59.184288 2059 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:15:59.184565 kubelet[2059]: I0813 01:15:59.184402 2059 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:15:59.184791 kubelet[2059]: I0813 01:15:59.184408 2059 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:15:59.184791 kubelet[2059]: I0813 01:15:59.184436 2059 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:15:59.184791 kubelet[2059]: I0813 01:15:59.184507 2059 kubelet.go:408] "Attempting 
to sync node with API server" Aug 13 01:15:59.184791 kubelet[2059]: I0813 01:15:59.184516 2059 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:15:59.195329 kubelet[2059]: I0813 01:15:59.195311 2059 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:15:59.195329 kubelet[2059]: I0813 01:15:59.195329 2059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:15:59.208423 kubelet[2059]: I0813 01:15:59.208409 2059 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:15:59.208816 kubelet[2059]: I0813 01:15:59.208807 2059 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:15:59.209099 kubelet[2059]: I0813 01:15:59.209091 2059 server.go:1274] "Started kubelet" Aug 13 01:15:59.238790 sudo[2074]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:15:59.238937 sudo[2074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 01:15:59.248011 kubelet[2059]: I0813 01:15:59.247293 2059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:15:59.251143 kubelet[2059]: I0813 01:15:59.251134 2059 apiserver.go:52] "Watching apiserver" Aug 13 01:15:59.253408 kubelet[2059]: I0813 01:15:59.253395 2059 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:15:59.253877 kubelet[2059]: I0813 01:15:59.253855 2059 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:15:59.253955 kubelet[2059]: I0813 01:15:59.253944 2059 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:15:59.254328 kubelet[2059]: I0813 01:15:59.254316 2059 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:15:59.255134 kubelet[2059]: I0813 01:15:59.255125 2059 factory.go:221] Registration of the systemd container factory successfully Aug 13 
01:15:59.255242 kubelet[2059]: I0813 01:15:59.255230 2059 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:15:59.255803 kubelet[2059]: I0813 01:15:59.255783 2059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:15:59.256003 kubelet[2059]: I0813 01:15:59.255996 2059 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:15:59.262726 kubelet[2059]: I0813 01:15:59.262713 2059 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:15:59.270507 kubelet[2059]: I0813 01:15:59.270489 2059 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:15:59.272400 kubelet[2059]: I0813 01:15:59.272389 2059 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:15:59.276184 kubelet[2059]: E0813 01:15:59.276168 2059 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:15:59.297082 kubelet[2059]: I0813 01:15:59.297060 2059 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:15:59.297082 kubelet[2059]: I0813 01:15:59.297070 2059 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:15:59.297082 kubelet[2059]: I0813 01:15:59.297085 2059 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:15:59.297217 kubelet[2059]: I0813 01:15:59.297186 2059 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:15:59.297217 kubelet[2059]: I0813 01:15:59.297193 2059 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:15:59.297217 kubelet[2059]: I0813 01:15:59.297209 2059 policy_none.go:49] "None policy: Start" Aug 13 01:15:59.297741 kubelet[2059]: I0813 01:15:59.297727 2059 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:15:59.297782 kubelet[2059]: I0813 01:15:59.297746 2059 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:15:59.297840 kubelet[2059]: I0813 01:15:59.297830 2059 state_mem.go:75] "Updated machine memory state" Aug 13 01:15:59.305780 kubelet[2059]: I0813 01:15:59.305766 2059 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:15:59.305984 kubelet[2059]: I0813 01:15:59.305976 2059 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:15:59.306055 kubelet[2059]: I0813 01:15:59.306034 2059 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:15:59.308043 kubelet[2059]: I0813 01:15:59.308034 2059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:15:59.331295 kubelet[2059]: I0813 01:15:59.331272 2059 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 01:15:59.332110 kubelet[2059]: I0813 01:15:59.332101 2059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:15:59.332174 kubelet[2059]: I0813 01:15:59.332167 2059 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:15:59.332228 kubelet[2059]: I0813 01:15:59.332221 2059 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:15:59.332329 kubelet[2059]: E0813 01:15:59.332318 2059 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 01:15:59.413521 kubelet[2059]: I0813 01:15:59.413500 2059 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:15:59.422532 kubelet[2059]: I0813 01:15:59.422511 2059 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 01:15:59.422638 kubelet[2059]: I0813 01:15:59.422556 2059 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 01:15:59.454777 kubelet[2059]: I0813 01:15:59.454757 2059 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:15:59.554967 kubelet[2059]: I0813 01:15:59.554915 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f0f3f312279b56beb51bcbf6f7a032d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5f0f3f312279b56beb51bcbf6f7a032d\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:15:59.555106 kubelet[2059]: I0813 01:15:59.555086 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" 
Aug 13 01:15:59.555166 kubelet[2059]: I0813 01:15:59.555157 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:15:59.555219 kubelet[2059]: I0813 01:15:59.555211 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:15:59.555281 kubelet[2059]: I0813 01:15:59.555272 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:15:59.555336 kubelet[2059]: I0813 01:15:59.555328 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:15:59.555389 kubelet[2059]: I0813 01:15:59.555380 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 
01:15:59.555443 kubelet[2059]: I0813 01:15:59.555433 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f0f3f312279b56beb51bcbf6f7a032d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5f0f3f312279b56beb51bcbf6f7a032d\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:15:59.555494 kubelet[2059]: I0813 01:15:59.555483 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f0f3f312279b56beb51bcbf6f7a032d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5f0f3f312279b56beb51bcbf6f7a032d\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:15:59.802027 sudo[2074]: pam_unix(sudo:session): session closed for user root Aug 13 01:16:00.375169 kubelet[2059]: I0813 01:16:00.375128 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.375113693 podStartE2EDuration="1.375113693s" podCreationTimestamp="2025-08-13 01:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:16:00.36281194 +0000 UTC m=+1.318830836" watchObservedRunningTime="2025-08-13 01:16:00.375113693 +0000 UTC m=+1.331132583" Aug 13 01:16:00.382523 kubelet[2059]: I0813 01:16:00.382489 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.382476427 podStartE2EDuration="1.382476427s" podCreationTimestamp="2025-08-13 01:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:16:00.375526155 +0000 UTC m=+1.331545038" watchObservedRunningTime="2025-08-13 01:16:00.382476427 +0000 UTC m=+1.338495318" Aug 13 01:16:00.395525 
kubelet[2059]: I0813 01:16:00.395481 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.395469739 podStartE2EDuration="1.395469739s" podCreationTimestamp="2025-08-13 01:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:16:00.383017681 +0000 UTC m=+1.339036573" watchObservedRunningTime="2025-08-13 01:16:00.395469739 +0000 UTC m=+1.351488628" Aug 13 01:16:01.359517 kubelet[2059]: I0813 01:16:01.359491 2059 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:16:01.360287 kubelet[2059]: I0813 01:16:01.359938 2059 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:16:01.374074 env[1257]: time="2025-08-13T01:16:01.359831897Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:16:01.856069 sudo[1441]: pam_unix(sudo:session): session closed for user root Aug 13 01:16:01.857380 sshd[1437]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:01.859845 systemd-logind[1246]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:16:01.861088 systemd[1]: sshd@4-139.178.70.100:22-139.178.68.195:44230.service: Deactivated successfully. Aug 13 01:16:01.861630 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:16:01.861735 systemd[1]: session-7.scope: Consumed 3.686s CPU time. Aug 13 01:16:01.862581 systemd-logind[1246]: Removed session 7. Aug 13 01:16:02.078142 systemd[1]: Created slice kubepods-besteffort-pod9a8040d2_1446_41a0_a6e4_620a76f01452.slice. Aug 13 01:16:02.089340 systemd[1]: Created slice kubepods-burstable-pod794f0d21_c4a7_4cc5_a8d6_d8350a1a354f.slice. 
Aug 13 01:16:02.108698 kubelet[2059]: E0813 01:16:02.108612 2059 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-b8ctm lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-b8ctm lib-modules xtables-lock]: context canceled" pod="kube-system/cilium-m2snl" podUID="794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" Aug 13 01:16:02.271497 kubelet[2059]: I0813 01:16:02.271471 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkh2f\" (UniqueName: \"kubernetes.io/projected/9a8040d2-1446-41a0-a6e4-620a76f01452-kube-api-access-qkh2f\") pod \"kube-proxy-8lpn5\" (UID: \"9a8040d2-1446-41a0-a6e4-620a76f01452\") " pod="kube-system/kube-proxy-8lpn5" Aug 13 01:16:02.271650 kubelet[2059]: I0813 01:16:02.271638 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-run\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.271753 kubelet[2059]: I0813 01:16:02.271742 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a8040d2-1446-41a0-a6e4-620a76f01452-xtables-lock\") pod \"kube-proxy-8lpn5\" (UID: \"9a8040d2-1446-41a0-a6e4-620a76f01452\") " pod="kube-system/kube-proxy-8lpn5" Aug 13 01:16:02.271888 kubelet[2059]: I0813 01:16:02.271877 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hostproc\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.271968 kubelet[2059]: I0813 01:16:02.271958 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-net\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272044 kubelet[2059]: I0813 01:16:02.272035 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-kernel\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272113 kubelet[2059]: I0813 01:16:02.272103 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cni-path\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272191 kubelet[2059]: I0813 01:16:02.272181 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-clustermesh-secrets\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272247 kubelet[2059]: I0813 01:16:02.272237 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hubble-tls\") pod 
\"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272348 kubelet[2059]: I0813 01:16:02.272337 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a8040d2-1446-41a0-a6e4-620a76f01452-kube-proxy\") pod \"kube-proxy-8lpn5\" (UID: \"9a8040d2-1446-41a0-a6e4-620a76f01452\") " pod="kube-system/kube-proxy-8lpn5" Aug 13 01:16:02.272423 kubelet[2059]: I0813 01:16:02.272415 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-xtables-lock\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272500 kubelet[2059]: I0813 01:16:02.272488 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-config-path\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272566 kubelet[2059]: I0813 01:16:02.272558 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ctm\" (UniqueName: \"kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-kube-api-access-b8ctm\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272624 kubelet[2059]: I0813 01:16:02.272615 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-cgroup\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 
01:16:02.272701 kubelet[2059]: I0813 01:16:02.272693 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-etc-cni-netd\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272779 kubelet[2059]: I0813 01:16:02.272768 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-lib-modules\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.272849 kubelet[2059]: I0813 01:16:02.272839 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a8040d2-1446-41a0-a6e4-620a76f01452-lib-modules\") pod \"kube-proxy-8lpn5\" (UID: \"9a8040d2-1446-41a0-a6e4-620a76f01452\") " pod="kube-system/kube-proxy-8lpn5" Aug 13 01:16:02.272927 kubelet[2059]: I0813 01:16:02.272917 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-bpf-maps\") pod \"cilium-m2snl\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " pod="kube-system/cilium-m2snl" Aug 13 01:16:02.366084 systemd[1]: Created slice kubepods-besteffort-pod3d9b3505_7044_4cab_9ec6_bf9b840b2685.slice. 
Aug 13 01:16:02.373520 kubelet[2059]: I0813 01:16:02.373487 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9b3505-7044-4cab-9ec6-bf9b840b2685-cilium-config-path\") pod \"cilium-operator-5d85765b45-dglxb\" (UID: \"3d9b3505-7044-4cab-9ec6-bf9b840b2685\") " pod="kube-system/cilium-operator-5d85765b45-dglxb" Aug 13 01:16:02.373635 kubelet[2059]: I0813 01:16:02.373562 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvpkj\" (UniqueName: \"kubernetes.io/projected/3d9b3505-7044-4cab-9ec6-bf9b840b2685-kube-api-access-hvpkj\") pod \"cilium-operator-5d85765b45-dglxb\" (UID: \"3d9b3505-7044-4cab-9ec6-bf9b840b2685\") " pod="kube-system/cilium-operator-5d85765b45-dglxb" Aug 13 01:16:02.375083 kubelet[2059]: I0813 01:16:02.375059 2059 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 01:16:02.474607 kubelet[2059]: I0813 01:16:02.474570 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hostproc\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.474934 kubelet[2059]: I0813 01:16:02.474913 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-bpf-maps\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475062 kubelet[2059]: I0813 01:16:02.475048 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-xtables-lock\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475148 kubelet[2059]: I0813 01:16:02.475141 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ctm\" (UniqueName: \"kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-kube-api-access-b8ctm\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475367 kubelet[2059]: I0813 01:16:02.475358 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-cgroup\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475448 kubelet[2059]: I0813 01:16:02.475440 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-run\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475508 kubelet[2059]: I0813 01:16:02.475499 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-config-path\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475560 kubelet[2059]: I0813 01:16:02.475552 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-kernel\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475613 kubelet[2059]: I0813 
01:16:02.475605 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hubble-tls\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475675 kubelet[2059]: I0813 01:16:02.475657 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-etc-cni-netd\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475729 kubelet[2059]: I0813 01:16:02.475720 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-lib-modules\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475785 kubelet[2059]: I0813 01:16:02.475776 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-net\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475836 kubelet[2059]: I0813 01:16:02.475827 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cni-path\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" (UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.475934 kubelet[2059]: I0813 01:16:02.475920 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-clustermesh-secrets\") pod \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\" 
(UID: \"794f0d21-c4a7-4cc5-a8d6-d8350a1a354f\") " Aug 13 01:16:02.477522 kubelet[2059]: I0813 01:16:02.474888 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hostproc" (OuterVolumeSpecName: "hostproc") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.477583 kubelet[2059]: I0813 01:16:02.475024 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.477583 kubelet[2059]: I0813 01:16:02.475118 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.477583 kubelet[2059]: I0813 01:16:02.477542 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.477583 kubelet[2059]: I0813 01:16:02.477553 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.478427 kubelet[2059]: I0813 01:16:02.478412 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:16:02.478484 kubelet[2059]: I0813 01:16:02.478431 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.478484 kubelet[2059]: I0813 01:16:02.478442 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.478484 kubelet[2059]: I0813 01:16:02.478451 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.478484 kubelet[2059]: I0813 01:16:02.478466 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.481561 systemd[1]: var-lib-kubelet-pods-794f0d21\x2dc4a7\x2d4cc5\x2da8d6\x2dd8350a1a354f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8ctm.mount: Deactivated successfully. Aug 13 01:16:02.482633 kubelet[2059]: I0813 01:16:02.482619 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cni-path" (OuterVolumeSpecName: "cni-path") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:16:02.483221 kubelet[2059]: I0813 01:16:02.483095 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-kube-api-access-b8ctm" (OuterVolumeSpecName: "kube-api-access-b8ctm") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "kube-api-access-b8ctm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:16:02.484609 kubelet[2059]: I0813 01:16:02.484596 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:16:02.485717 kubelet[2059]: I0813 01:16:02.485442 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" (UID: "794f0d21-c4a7-4cc5-a8d6-d8350a1a354f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:16:02.576715 kubelet[2059]: I0813 01:16:02.576682 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.576846 kubelet[2059]: I0813 01:16:02.576836 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.576900 kubelet[2059]: I0813 01:16:02.576893 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.576949 kubelet[2059]: I0813 01:16:02.576941 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" 
Aug 13 01:16:02.577001 kubelet[2059]: I0813 01:16:02.576993 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577048 kubelet[2059]: I0813 01:16:02.577040 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577091 kubelet[2059]: I0813 01:16:02.577084 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577135 kubelet[2059]: I0813 01:16:02.577128 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ctm\" (UniqueName: \"kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-kube-api-access-b8ctm\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577179 kubelet[2059]: I0813 01:16:02.577172 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577227 kubelet[2059]: I0813 01:16:02.577220 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577278 kubelet[2059]: I0813 01:16:02.577271 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577325 kubelet[2059]: I0813 01:16:02.577317 2059 reconciler_common.go:293] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577368 kubelet[2059]: I0813 01:16:02.577361 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.577416 kubelet[2059]: I0813 01:16:02.577408 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:16:02.670118 env[1257]: time="2025-08-13T01:16:02.668749042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dglxb,Uid:3d9b3505-7044-4cab-9ec6-bf9b840b2685,Namespace:kube-system,Attempt:0,}" Aug 13 01:16:02.688160 env[1257]: time="2025-08-13T01:16:02.688131782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lpn5,Uid:9a8040d2-1446-41a0-a6e4-620a76f01452,Namespace:kube-system,Attempt:0,}" Aug 13 01:16:02.894560 env[1257]: time="2025-08-13T01:16:02.894434509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:16:02.894560 env[1257]: time="2025-08-13T01:16:02.894456638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:16:02.894560 env[1257]: time="2025-08-13T01:16:02.894463931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:16:02.894766 env[1257]: time="2025-08-13T01:16:02.894735302Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440 pid=2143 runtime=io.containerd.runc.v2 Aug 13 01:16:02.902159 systemd[1]: Started cri-containerd-808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440.scope. Aug 13 01:16:02.919699 env[1257]: time="2025-08-13T01:16:02.919321864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:16:02.919699 env[1257]: time="2025-08-13T01:16:02.919352252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:16:02.919699 env[1257]: time="2025-08-13T01:16:02.919364576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:16:02.919699 env[1257]: time="2025-08-13T01:16:02.919451684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0774e390c1282ff9fddae271d8e52e7d43207eeed614d4b5a83a65c6ac62058 pid=2176 runtime=io.containerd.runc.v2 Aug 13 01:16:02.939704 systemd[1]: Started cri-containerd-c0774e390c1282ff9fddae271d8e52e7d43207eeed614d4b5a83a65c6ac62058.scope. 
Aug 13 01:16:02.943209 env[1257]: time="2025-08-13T01:16:02.943185653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dglxb,Uid:3d9b3505-7044-4cab-9ec6-bf9b840b2685,Namespace:kube-system,Attempt:0,} returns sandbox id \"808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440\"" Aug 13 01:16:02.954519 env[1257]: time="2025-08-13T01:16:02.953896809Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:16:02.961569 env[1257]: time="2025-08-13T01:16:02.961543944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lpn5,Uid:9a8040d2-1446-41a0-a6e4-620a76f01452,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0774e390c1282ff9fddae271d8e52e7d43207eeed614d4b5a83a65c6ac62058\"" Aug 13 01:16:02.963395 env[1257]: time="2025-08-13T01:16:02.963078936Z" level=info msg="CreateContainer within sandbox \"c0774e390c1282ff9fddae271d8e52e7d43207eeed614d4b5a83a65c6ac62058\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:16:02.992774 env[1257]: time="2025-08-13T01:16:02.992727867Z" level=info msg="CreateContainer within sandbox \"c0774e390c1282ff9fddae271d8e52e7d43207eeed614d4b5a83a65c6ac62058\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a7b7bfe1131cbb6b25ee3f1496ef79ff50276ceb5041d026a3f707137ba2363\"" Aug 13 01:16:02.994413 env[1257]: time="2025-08-13T01:16:02.993551972Z" level=info msg="StartContainer for \"6a7b7bfe1131cbb6b25ee3f1496ef79ff50276ceb5041d026a3f707137ba2363\"" Aug 13 01:16:03.007542 systemd[1]: Started cri-containerd-6a7b7bfe1131cbb6b25ee3f1496ef79ff50276ceb5041d026a3f707137ba2363.scope. 
Aug 13 01:16:03.034442 env[1257]: time="2025-08-13T01:16:03.033520952Z" level=info msg="StartContainer for \"6a7b7bfe1131cbb6b25ee3f1496ef79ff50276ceb5041d026a3f707137ba2363\" returns successfully" Aug 13 01:16:03.336765 systemd[1]: Removed slice kubepods-burstable-pod794f0d21_c4a7_4cc5_a8d6_d8350a1a354f.slice. Aug 13 01:16:03.352388 kubelet[2059]: I0813 01:16:03.352350 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8lpn5" podStartSLOduration=1.352339624 podStartE2EDuration="1.352339624s" podCreationTimestamp="2025-08-13 01:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:16:03.351397312 +0000 UTC m=+4.307416207" watchObservedRunningTime="2025-08-13 01:16:03.352339624 +0000 UTC m=+4.308358515" Aug 13 01:16:03.395087 systemd[1]: var-lib-kubelet-pods-794f0d21\x2dc4a7\x2d4cc5\x2da8d6\x2dd8350a1a354f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:16:03.400960 systemd[1]: var-lib-kubelet-pods-794f0d21\x2dc4a7\x2d4cc5\x2da8d6\x2dd8350a1a354f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:16:03.411464 systemd[1]: Created slice kubepods-burstable-podd19dd2b0_5d8c_44b2_82ac_9c6c490607f6.slice. 
Aug 13 01:16:03.483821 kubelet[2059]: I0813 01:16:03.483793 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cni-path\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.483956 kubelet[2059]: I0813 01:16:03.483945 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-etc-cni-netd\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484029 kubelet[2059]: I0813 01:16:03.484020 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-clustermesh-secrets\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484094 kubelet[2059]: I0813 01:16:03.484086 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-run\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484151 kubelet[2059]: I0813 01:16:03.484142 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-xtables-lock\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484218 kubelet[2059]: I0813 01:16:03.484208 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-config-path\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484282 kubelet[2059]: I0813 01:16:03.484273 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-kernel\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484348 kubelet[2059]: I0813 01:16:03.484333 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hostproc\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484404 kubelet[2059]: I0813 01:16:03.484396 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hubble-tls\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484473 kubelet[2059]: I0813 01:16:03.484464 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tkn\" (UniqueName: \"kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-kube-api-access-24tkn\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484527 kubelet[2059]: I0813 01:16:03.484519 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-bpf-maps\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484592 kubelet[2059]: I0813 01:16:03.484584 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-cgroup\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484651 kubelet[2059]: I0813 01:16:03.484635 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-lib-modules\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.484725 kubelet[2059]: I0813 01:16:03.484717 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-net\") pod \"cilium-zrwwr\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") " pod="kube-system/cilium-zrwwr" Aug 13 01:16:03.713228 env[1257]: time="2025-08-13T01:16:03.713204115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zrwwr,Uid:d19dd2b0-5d8c-44b2-82ac-9c6c490607f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:16:03.733700 env[1257]: time="2025-08-13T01:16:03.733619776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:16:03.733700 env[1257]: time="2025-08-13T01:16:03.733651058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:16:03.733873 env[1257]: time="2025-08-13T01:16:03.733659964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:16:03.733873 env[1257]: time="2025-08-13T01:16:03.733778543Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0 pid=2263 runtime=io.containerd.runc.v2 Aug 13 01:16:03.757358 systemd[1]: Started cri-containerd-25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0.scope. Aug 13 01:16:03.773162 env[1257]: time="2025-08-13T01:16:03.773132900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zrwwr,Uid:d19dd2b0-5d8c-44b2-82ac-9c6c490607f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\"" Aug 13 01:16:04.884565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899096533.mount: Deactivated successfully. 
Aug 13 01:16:05.334193 kubelet[2059]: I0813 01:16:05.334097 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794f0d21-c4a7-4cc5-a8d6-d8350a1a354f" path="/var/lib/kubelet/pods/794f0d21-c4a7-4cc5-a8d6-d8350a1a354f/volumes" Aug 13 01:16:05.900654 env[1257]: time="2025-08-13T01:16:05.900621019Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:16:05.902478 env[1257]: time="2025-08-13T01:16:05.902451169Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:16:05.904232 env[1257]: time="2025-08-13T01:16:05.904211280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:16:05.904582 env[1257]: time="2025-08-13T01:16:05.904559865Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:16:05.905948 env[1257]: time="2025-08-13T01:16:05.905565198Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:16:05.906998 env[1257]: time="2025-08-13T01:16:05.906974757Z" level=info msg="CreateContainer within sandbox \"808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:16:05.926884 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3702446598.mount: Deactivated successfully. Aug 13 01:16:05.930277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173486422.mount: Deactivated successfully. Aug 13 01:16:05.939301 env[1257]: time="2025-08-13T01:16:05.939269855Z" level=info msg="CreateContainer within sandbox \"808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\"" Aug 13 01:16:05.940279 env[1257]: time="2025-08-13T01:16:05.939881098Z" level=info msg="StartContainer for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\"" Aug 13 01:16:05.958248 systemd[1]: Started cri-containerd-03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d.scope. Aug 13 01:16:06.016830 env[1257]: time="2025-08-13T01:16:06.016799424Z" level=info msg="StartContainer for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" returns successfully" Aug 13 01:16:10.781732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount10441502.mount: Deactivated successfully. 
Aug 13 01:16:11.745378 kubelet[2059]: I0813 01:16:11.745337 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dglxb" podStartSLOduration=6.784627953 podStartE2EDuration="9.745323002s" podCreationTimestamp="2025-08-13 01:16:02 +0000 UTC" firstStartedPulling="2025-08-13 01:16:02.944794426 +0000 UTC m=+3.900813310" lastFinishedPulling="2025-08-13 01:16:05.905489476 +0000 UTC m=+6.861508359" observedRunningTime="2025-08-13 01:16:06.405914348 +0000 UTC m=+7.361933235" watchObservedRunningTime="2025-08-13 01:16:11.745323002 +0000 UTC m=+12.701341905" Aug 13 01:16:14.831183 env[1257]: time="2025-08-13T01:16:14.831146385Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:16:14.832320 env[1257]: time="2025-08-13T01:16:14.832302387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:16:14.833455 env[1257]: time="2025-08-13T01:16:14.833441199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:16:14.833876 env[1257]: time="2025-08-13T01:16:14.833858735Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:16:14.874221 env[1257]: time="2025-08-13T01:16:14.874190501Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:16:14.880797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42750466.mount: Deactivated successfully. Aug 13 01:16:14.886234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165332416.mount: Deactivated successfully. Aug 13 01:16:14.904470 env[1257]: time="2025-08-13T01:16:14.904408347Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\"" Aug 13 01:16:14.905830 env[1257]: time="2025-08-13T01:16:14.905810247Z" level=info msg="StartContainer for \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\"" Aug 13 01:16:14.939640 systemd[1]: Started cri-containerd-2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96.scope. Aug 13 01:16:14.969136 env[1257]: time="2025-08-13T01:16:14.969104811Z" level=info msg="StartContainer for \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\" returns successfully" Aug 13 01:16:15.007904 systemd[1]: cri-containerd-2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96.scope: Deactivated successfully. 
Aug 13 01:16:15.467629 env[1257]: time="2025-08-13T01:16:15.467542909Z" level=info msg="shim disconnected" id=2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96 Aug 13 01:16:15.467629 env[1257]: time="2025-08-13T01:16:15.467578593Z" level=warning msg="cleaning up after shim disconnected" id=2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96 namespace=k8s.io Aug 13 01:16:15.467629 env[1257]: time="2025-08-13T01:16:15.467588190Z" level=info msg="cleaning up dead shim" Aug 13 01:16:15.474899 env[1257]: time="2025-08-13T01:16:15.474161144Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:16:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n" Aug 13 01:16:15.878869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96-rootfs.mount: Deactivated successfully. Aug 13 01:16:16.470263 env[1257]: time="2025-08-13T01:16:16.470001485Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:16:16.477227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704761182.mount: Deactivated successfully. Aug 13 01:16:16.483426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080942713.mount: Deactivated successfully. 
Aug 13 01:16:16.486158 env[1257]: time="2025-08-13T01:16:16.486079573Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\"" Aug 13 01:16:16.486716 env[1257]: time="2025-08-13T01:16:16.486505165Z" level=info msg="StartContainer for \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\"" Aug 13 01:16:16.503559 systemd[1]: Started cri-containerd-919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3.scope. Aug 13 01:16:16.543065 env[1257]: time="2025-08-13T01:16:16.543032854Z" level=info msg="StartContainer for \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\" returns successfully" Aug 13 01:16:16.589221 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:16:16.589405 systemd[1]: Stopped systemd-sysctl.service. Aug 13 01:16:16.589838 systemd[1]: Stopping systemd-sysctl.service... Aug 13 01:16:16.591503 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:16:16.596770 systemd[1]: cri-containerd-919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3.scope: Deactivated successfully. 
Aug 13 01:16:16.661495 env[1257]: time="2025-08-13T01:16:16.661453558Z" level=info msg="shim disconnected" id=919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3 Aug 13 01:16:16.661495 env[1257]: time="2025-08-13T01:16:16.661493131Z" level=warning msg="cleaning up after shim disconnected" id=919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3 namespace=k8s.io Aug 13 01:16:16.661495 env[1257]: time="2025-08-13T01:16:16.661500540Z" level=info msg="cleaning up dead shim" Aug 13 01:16:16.666987 env[1257]: time="2025-08-13T01:16:16.666951462Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:16:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2575 runtime=io.containerd.runc.v2\n" Aug 13 01:16:16.782463 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:16:17.479719 env[1257]: time="2025-08-13T01:16:17.477960355Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:16:17.504629 env[1257]: time="2025-08-13T01:16:17.504594185Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\"" Aug 13 01:16:17.513196 env[1257]: time="2025-08-13T01:16:17.505806256Z" level=info msg="StartContainer for \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\"" Aug 13 01:16:17.524292 systemd[1]: Started cri-containerd-aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032.scope. 
Aug 13 01:16:17.563497 env[1257]: time="2025-08-13T01:16:17.563465720Z" level=info msg="StartContainer for \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\" returns successfully" Aug 13 01:16:17.653868 systemd[1]: cri-containerd-aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032.scope: Deactivated successfully. Aug 13 01:16:17.809547 env[1257]: time="2025-08-13T01:16:17.809226150Z" level=info msg="shim disconnected" id=aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032 Aug 13 01:16:17.809547 env[1257]: time="2025-08-13T01:16:17.809256617Z" level=warning msg="cleaning up after shim disconnected" id=aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032 namespace=k8s.io Aug 13 01:16:17.809547 env[1257]: time="2025-08-13T01:16:17.809263271Z" level=info msg="cleaning up dead shim" Aug 13 01:16:17.814276 env[1257]: time="2025-08-13T01:16:17.814252972Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:16:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2634 runtime=io.containerd.runc.v2\n" Aug 13 01:16:17.880115 systemd[1]: run-containerd-runc-k8s.io-aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032-runc.jExe06.mount: Deactivated successfully. Aug 13 01:16:17.880200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032-rootfs.mount: Deactivated successfully. 
Aug 13 01:16:18.476772 env[1257]: time="2025-08-13T01:16:18.475712744Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:16:18.499195 env[1257]: time="2025-08-13T01:16:18.499158086Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\"" Aug 13 01:16:18.499849 env[1257]: time="2025-08-13T01:16:18.499831042Z" level=info msg="StartContainer for \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\"" Aug 13 01:16:18.513837 systemd[1]: Started cri-containerd-68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344.scope. Aug 13 01:16:18.536350 env[1257]: time="2025-08-13T01:16:18.536314204Z" level=info msg="StartContainer for \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\" returns successfully" Aug 13 01:16:18.539941 systemd[1]: cri-containerd-68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344.scope: Deactivated successfully. 
Aug 13 01:16:18.553920 env[1257]: time="2025-08-13T01:16:18.553880230Z" level=info msg="shim disconnected" id=68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344 Aug 13 01:16:18.553920 env[1257]: time="2025-08-13T01:16:18.553916283Z" level=warning msg="cleaning up after shim disconnected" id=68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344 namespace=k8s.io Aug 13 01:16:18.553920 env[1257]: time="2025-08-13T01:16:18.553922889Z" level=info msg="cleaning up dead shim" Aug 13 01:16:18.560254 env[1257]: time="2025-08-13T01:16:18.560225438Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:16:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2691 runtime=io.containerd.runc.v2\n" Aug 13 01:16:19.479323 env[1257]: time="2025-08-13T01:16:19.478810194Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:16:19.532819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109189085.mount: Deactivated successfully. Aug 13 01:16:19.536037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735112706.mount: Deactivated successfully. Aug 13 01:16:19.768698 env[1257]: time="2025-08-13T01:16:19.768438606Z" level=info msg="CreateContainer within sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\"" Aug 13 01:16:19.769083 env[1257]: time="2025-08-13T01:16:19.769069908Z" level=info msg="StartContainer for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\"" Aug 13 01:16:19.779745 systemd[1]: Started cri-containerd-e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1.scope. 
Aug 13 01:16:19.819408 env[1257]: time="2025-08-13T01:16:19.819377034Z" level=info msg="StartContainer for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" returns successfully" Aug 13 01:16:20.119069 kubelet[2059]: I0813 01:16:20.118992 2059 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:16:20.226627 systemd[1]: Created slice kubepods-burstable-pod1b30ad91_0d4d_4685_ac87_72659444415a.slice. Aug 13 01:16:20.230833 systemd[1]: Created slice kubepods-burstable-podf45600f7_1783_4757_b5b8_e900a3642a47.slice. Aug 13 01:16:20.293956 kubelet[2059]: I0813 01:16:20.293929 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snw9k\" (UniqueName: \"kubernetes.io/projected/f45600f7-1783-4757-b5b8-e900a3642a47-kube-api-access-snw9k\") pod \"coredns-7c65d6cfc9-5sh76\" (UID: \"f45600f7-1783-4757-b5b8-e900a3642a47\") " pod="kube-system/coredns-7c65d6cfc9-5sh76" Aug 13 01:16:20.294110 kubelet[2059]: I0813 01:16:20.294098 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhkjf\" (UniqueName: \"kubernetes.io/projected/1b30ad91-0d4d-4685-ac87-72659444415a-kube-api-access-dhkjf\") pod \"coredns-7c65d6cfc9-mhbpb\" (UID: \"1b30ad91-0d4d-4685-ac87-72659444415a\") " pod="kube-system/coredns-7c65d6cfc9-mhbpb" Aug 13 01:16:20.294209 kubelet[2059]: I0813 01:16:20.294198 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b30ad91-0d4d-4685-ac87-72659444415a-config-volume\") pod \"coredns-7c65d6cfc9-mhbpb\" (UID: \"1b30ad91-0d4d-4685-ac87-72659444415a\") " pod="kube-system/coredns-7c65d6cfc9-mhbpb" Aug 13 01:16:20.294280 kubelet[2059]: I0813 01:16:20.294271 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f45600f7-1783-4757-b5b8-e900a3642a47-config-volume\") pod \"coredns-7c65d6cfc9-5sh76\" (UID: \"f45600f7-1783-4757-b5b8-e900a3642a47\") " pod="kube-system/coredns-7c65d6cfc9-5sh76" Aug 13 01:16:20.493222 kubelet[2059]: I0813 01:16:20.493175 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zrwwr" podStartSLOduration=6.427762145 podStartE2EDuration="17.493162696s" podCreationTimestamp="2025-08-13 01:16:03 +0000 UTC" firstStartedPulling="2025-08-13 01:16:03.773821095 +0000 UTC m=+4.729839978" lastFinishedPulling="2025-08-13 01:16:14.839221643 +0000 UTC m=+15.795240529" observedRunningTime="2025-08-13 01:16:20.492048755 +0000 UTC m=+21.448067644" watchObservedRunningTime="2025-08-13 01:16:20.493162696 +0000 UTC m=+21.449181585" Aug 13 01:16:20.530168 env[1257]: time="2025-08-13T01:16:20.529911156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mhbpb,Uid:1b30ad91-0d4d-4685-ac87-72659444415a,Namespace:kube-system,Attempt:0,}" Aug 13 01:16:20.534070 env[1257]: time="2025-08-13T01:16:20.533875759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5sh76,Uid:f45600f7-1783-4757-b5b8-e900a3642a47,Namespace:kube-system,Attempt:0,}" Aug 13 01:16:21.156691 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Aug 13 01:16:21.461693 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Aug 13 01:16:23.919393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 01:16:23.925850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 01:16:23.918643 systemd-networkd[1063]: cilium_host: Link UP Aug 13 01:16:23.918815 systemd-networkd[1063]: cilium_net: Link UP Aug 13 01:16:23.919213 systemd-networkd[1063]: cilium_net: Gained carrier Aug 13 01:16:23.919545 systemd-networkd[1063]: cilium_host: Gained carrier Aug 13 01:16:24.125450 systemd-networkd[1063]: cilium_vxlan: Link UP Aug 13 01:16:24.125455 systemd-networkd[1063]: cilium_vxlan: Gained carrier Aug 13 01:16:24.130409 systemd-networkd[1063]: cilium_net: Gained IPv6LL Aug 13 01:16:24.904819 systemd-networkd[1063]: cilium_host: Gained IPv6LL Aug 13 01:16:25.112686 kernel: NET: Registered PF_ALG protocol family Aug 13 01:16:25.416758 systemd-networkd[1063]: cilium_vxlan: Gained IPv6LL Aug 13 01:16:25.913297 systemd-networkd[1063]: lxc_health: Link UP Aug 13 01:16:25.925523 systemd-networkd[1063]: lxc_health: Gained carrier Aug 13 01:16:25.925729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 01:16:26.079962 systemd-networkd[1063]: lxc930414f271af: Link UP Aug 13 01:16:26.089445 kernel: eth0: renamed from tmpdd280 Aug 13 01:16:26.092874 systemd-networkd[1063]: lxc930414f271af: Gained carrier Aug 13 01:16:26.095717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc930414f271af: link becomes ready Aug 13 01:16:26.104117 systemd-networkd[1063]: lxc4d6f390db996: Link UP Aug 13 01:16:26.109684 kernel: eth0: renamed from tmpe93bc Aug 13 01:16:26.114224 systemd-networkd[1063]: lxc4d6f390db996: Gained carrier Aug 13 01:16:26.116715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4d6f390db996: link becomes ready Aug 13 01:16:27.144763 systemd-networkd[1063]: lxc_health: Gained IPv6LL Aug 13 01:16:27.464764 systemd-networkd[1063]: lxc930414f271af: Gained IPv6LL Aug 13 01:16:27.721744 systemd-networkd[1063]: lxc4d6f390db996: Gained IPv6LL Aug 13 
01:16:28.898079 env[1257]: time="2025-08-13T01:16:28.895660912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:16:28.898079 env[1257]: time="2025-08-13T01:16:28.895692740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:16:28.898079 env[1257]: time="2025-08-13T01:16:28.895699565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:16:28.898079 env[1257]: time="2025-08-13T01:16:28.895770754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e93bc25f759c992b9471b2edc3561fb962eb8b915dd11b3c00d4b338094b9931 pid=3253 runtime=io.containerd.runc.v2 Aug 13 01:16:28.905271 systemd[1]: Started cri-containerd-e93bc25f759c992b9471b2edc3561fb962eb8b915dd11b3c00d4b338094b9931.scope. Aug 13 01:16:28.917093 env[1257]: time="2025-08-13T01:16:28.914916086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:16:28.917093 env[1257]: time="2025-08-13T01:16:28.914939820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:16:28.917093 env[1257]: time="2025-08-13T01:16:28.914946640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:16:28.917093 env[1257]: time="2025-08-13T01:16:28.915027991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd280043be2eb4caa6b6abad560bdd9e20056e4cf5449e885d28aa2096e4209e pid=3283 runtime=io.containerd.runc.v2 Aug 13 01:16:28.934021 systemd[1]: Started cri-containerd-dd280043be2eb4caa6b6abad560bdd9e20056e4cf5449e885d28aa2096e4209e.scope. Aug 13 01:16:28.938813 systemd-resolved[1209]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:16:28.952868 systemd-resolved[1209]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:16:28.962935 env[1257]: time="2025-08-13T01:16:28.962911056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5sh76,Uid:f45600f7-1783-4757-b5b8-e900a3642a47,Namespace:kube-system,Attempt:0,} returns sandbox id \"e93bc25f759c992b9471b2edc3561fb962eb8b915dd11b3c00d4b338094b9931\"" Aug 13 01:16:28.964620 env[1257]: time="2025-08-13T01:16:28.964602533Z" level=info msg="CreateContainer within sandbox \"e93bc25f759c992b9471b2edc3561fb962eb8b915dd11b3c00d4b338094b9931\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:16:28.985147 env[1257]: time="2025-08-13T01:16:28.985115905Z" level=info msg="CreateContainer within sandbox \"e93bc25f759c992b9471b2edc3561fb962eb8b915dd11b3c00d4b338094b9931\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"378f17b5ff1eb679b5d61984b5e38fb3fb706bf850137a033531e8bcac78b988\"" Aug 13 01:16:28.986011 env[1257]: time="2025-08-13T01:16:28.985987084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mhbpb,Uid:1b30ad91-0d4d-4685-ac87-72659444415a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd280043be2eb4caa6b6abad560bdd9e20056e4cf5449e885d28aa2096e4209e\"" Aug 13 
01:16:28.986248 env[1257]: time="2025-08-13T01:16:28.986227155Z" level=info msg="StartContainer for \"378f17b5ff1eb679b5d61984b5e38fb3fb706bf850137a033531e8bcac78b988\"" Aug 13 01:16:28.992719 env[1257]: time="2025-08-13T01:16:28.991178995Z" level=info msg="CreateContainer within sandbox \"dd280043be2eb4caa6b6abad560bdd9e20056e4cf5449e885d28aa2096e4209e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:16:28.997496 env[1257]: time="2025-08-13T01:16:28.997456755Z" level=info msg="CreateContainer within sandbox \"dd280043be2eb4caa6b6abad560bdd9e20056e4cf5449e885d28aa2096e4209e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"839a431abbdd1aaec30c6605435b3399b26fe3fb4399d8ba4478113e9b929e70\"" Aug 13 01:16:28.997997 env[1257]: time="2025-08-13T01:16:28.997967703Z" level=info msg="StartContainer for \"839a431abbdd1aaec30c6605435b3399b26fe3fb4399d8ba4478113e9b929e70\"" Aug 13 01:16:29.018985 systemd[1]: Started cri-containerd-378f17b5ff1eb679b5d61984b5e38fb3fb706bf850137a033531e8bcac78b988.scope. Aug 13 01:16:29.036213 systemd[1]: Started cri-containerd-839a431abbdd1aaec30c6605435b3399b26fe3fb4399d8ba4478113e9b929e70.scope. 
Aug 13 01:16:29.107829 env[1257]: time="2025-08-13T01:16:29.107767542Z" level=info msg="StartContainer for \"839a431abbdd1aaec30c6605435b3399b26fe3fb4399d8ba4478113e9b929e70\" returns successfully" Aug 13 01:16:29.109191 env[1257]: time="2025-08-13T01:16:29.109165833Z" level=info msg="StartContainer for \"378f17b5ff1eb679b5d61984b5e38fb3fb706bf850137a033531e8bcac78b988\" returns successfully" Aug 13 01:16:29.511937 kubelet[2059]: I0813 01:16:29.511896 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5sh76" podStartSLOduration=27.511880058 podStartE2EDuration="27.511880058s" podCreationTimestamp="2025-08-13 01:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:16:29.510983329 +0000 UTC m=+30.467002224" watchObservedRunningTime="2025-08-13 01:16:29.511880058 +0000 UTC m=+30.467898952" Aug 13 01:16:29.520179 kubelet[2059]: I0813 01:16:29.520134 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mhbpb" podStartSLOduration=27.520120123 podStartE2EDuration="27.520120123s" podCreationTimestamp="2025-08-13 01:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:16:29.519029143 +0000 UTC m=+30.475048038" watchObservedRunningTime="2025-08-13 01:16:29.520120123 +0000 UTC m=+30.476139018" Aug 13 01:16:29.898479 systemd[1]: run-containerd-runc-k8s.io-dd280043be2eb4caa6b6abad560bdd9e20056e4cf5449e885d28aa2096e4209e-runc.k6CZRH.mount: Deactivated successfully. Aug 13 01:17:11.627936 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.68.195:37522.service. 
Aug 13 01:17:11.692492 sshd[3419]: Accepted publickey for core from 139.178.68.195 port 37522 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:11.694112 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:11.698883 systemd[1]: Started session-8.scope. Aug 13 01:17:11.699296 systemd-logind[1246]: New session 8 of user core. Aug 13 01:17:12.250310 sshd[3419]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:12.251983 systemd[1]: sshd@5-139.178.70.100:22-139.178.68.195:37522.service: Deactivated successfully. Aug 13 01:17:12.252438 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:17:12.253028 systemd-logind[1246]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:17:12.253495 systemd-logind[1246]: Removed session 8. Aug 13 01:17:17.253997 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.68.195:37524.service. Aug 13 01:17:17.289965 sshd[3433]: Accepted publickey for core from 139.178.68.195 port 37524 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:17.291099 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:17.294252 systemd[1]: Started session-9.scope. Aug 13 01:17:17.294685 systemd-logind[1246]: New session 9 of user core. Aug 13 01:17:17.392499 sshd[3433]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:17.394122 systemd[1]: sshd@6-139.178.70.100:22-139.178.68.195:37524.service: Deactivated successfully. Aug 13 01:17:17.394657 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:17:17.395196 systemd-logind[1246]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:17:17.395682 systemd-logind[1246]: Removed session 9. Aug 13 01:17:22.396105 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.68.195:59694.service. 
Aug 13 01:17:22.461505 sshd[3445]: Accepted publickey for core from 139.178.68.195 port 59694 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:22.462675 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:22.465804 systemd[1]: Started session-10.scope. Aug 13 01:17:22.466699 systemd-logind[1246]: New session 10 of user core. Aug 13 01:17:22.567945 sshd[3445]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:22.569802 systemd[1]: sshd@7-139.178.70.100:22-139.178.68.195:59694.service: Deactivated successfully. Aug 13 01:17:22.570295 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:17:22.570974 systemd-logind[1246]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:17:22.571435 systemd-logind[1246]: Removed session 10. Aug 13 01:17:27.572429 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.68.195:59708.service. Aug 13 01:17:27.600652 sshd[3460]: Accepted publickey for core from 139.178.68.195 port 59708 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:27.602048 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:27.605809 systemd[1]: Started session-11.scope. Aug 13 01:17:27.606140 systemd-logind[1246]: New session 11 of user core. Aug 13 01:17:27.746481 sshd[3460]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:27.748250 systemd-logind[1246]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:17:27.748351 systemd[1]: sshd@8-139.178.70.100:22-139.178.68.195:59708.service: Deactivated successfully. Aug 13 01:17:27.748803 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:17:27.749280 systemd-logind[1246]: Removed session 11. Aug 13 01:17:32.749849 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.68.195:50444.service. 
Aug 13 01:17:32.894705 sshd[3473]: Accepted publickey for core from 139.178.68.195 port 50444 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:32.895931 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:32.901202 systemd[1]: Started session-12.scope. Aug 13 01:17:32.901586 systemd-logind[1246]: New session 12 of user core. Aug 13 01:17:33.020427 sshd[3473]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:33.024633 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.68.195:50446.service. Aug 13 01:17:33.028038 systemd[1]: sshd@9-139.178.70.100:22-139.178.68.195:50444.service: Deactivated successfully. Aug 13 01:17:33.028779 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:17:33.029622 systemd-logind[1246]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:17:33.030624 systemd-logind[1246]: Removed session 12. Aug 13 01:17:33.060218 sshd[3484]: Accepted publickey for core from 139.178.68.195 port 50446 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:33.061321 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:33.064674 systemd[1]: Started session-13.scope. Aug 13 01:17:33.065157 systemd-logind[1246]: New session 13 of user core. Aug 13 01:17:33.214583 sshd[3484]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:33.217455 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.68.195:50454.service. Aug 13 01:17:33.228843 systemd[1]: sshd@10-139.178.70.100:22-139.178.68.195:50446.service: Deactivated successfully. Aug 13 01:17:33.229593 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:17:33.230311 systemd-logind[1246]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:17:33.230929 systemd-logind[1246]: Removed session 13. 
Aug 13 01:17:33.256379 sshd[3494]: Accepted publickey for core from 139.178.68.195 port 50454 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:33.257203 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:33.260644 systemd[1]: Started session-14.scope. Aug 13 01:17:33.261139 systemd-logind[1246]: New session 14 of user core. Aug 13 01:17:33.362985 sshd[3494]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:33.365239 systemd[1]: sshd@11-139.178.70.100:22-139.178.68.195:50454.service: Deactivated successfully. Aug 13 01:17:33.365657 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:17:33.366078 systemd-logind[1246]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:17:33.366533 systemd-logind[1246]: Removed session 14. Aug 13 01:17:38.366691 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.68.195:50470.service. Aug 13 01:17:38.400045 sshd[3509]: Accepted publickey for core from 139.178.68.195 port 50470 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:38.400331 sshd[3509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:38.402866 systemd-logind[1246]: New session 15 of user core. Aug 13 01:17:38.403393 systemd[1]: Started session-15.scope. Aug 13 01:17:38.502331 sshd[3509]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:38.504098 systemd-logind[1246]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:17:38.504273 systemd[1]: sshd@12-139.178.70.100:22-139.178.68.195:50470.service: Deactivated successfully. Aug 13 01:17:38.504837 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:17:38.505350 systemd-logind[1246]: Removed session 15. Aug 13 01:17:43.506527 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.68.195:34872.service. 
Aug 13 01:17:43.536798 sshd[3520]: Accepted publickey for core from 139.178.68.195 port 34872 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:43.538048 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:43.541225 systemd[1]: Started session-16.scope. Aug 13 01:17:43.541445 systemd-logind[1246]: New session 16 of user core. Aug 13 01:17:43.670344 sshd[3520]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:43.673089 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.68.195:34888.service. Aug 13 01:17:43.675811 systemd[1]: sshd@13-139.178.70.100:22-139.178.68.195:34872.service: Deactivated successfully. Aug 13 01:17:43.676251 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:17:43.676653 systemd-logind[1246]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:17:43.677129 systemd-logind[1246]: Removed session 16. Aug 13 01:17:43.702381 sshd[3531]: Accepted publickey for core from 139.178.68.195 port 34888 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:43.703378 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:43.706302 systemd[1]: Started session-17.scope. Aug 13 01:17:43.706758 systemd-logind[1246]: New session 17 of user core. Aug 13 01:17:47.308815 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.68.195:34892.service. Aug 13 01:17:47.310287 sshd[3531]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:47.321569 systemd[1]: sshd@14-139.178.70.100:22-139.178.68.195:34888.service: Deactivated successfully. Aug 13 01:17:47.322226 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:17:47.322709 systemd-logind[1246]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:17:47.323349 systemd-logind[1246]: Removed session 17. 
Aug 13 01:17:47.470417 sshd[3541]: Accepted publickey for core from 139.178.68.195 port 34892 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:47.471604 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:47.480703 systemd-logind[1246]: New session 18 of user core. Aug 13 01:17:47.481405 systemd[1]: Started session-18.scope. Aug 13 01:17:49.340765 sshd[3541]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:49.343578 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.68.195:34894.service. Aug 13 01:17:49.427335 systemd[1]: sshd@15-139.178.70.100:22-139.178.68.195:34892.service: Deactivated successfully. Aug 13 01:17:49.427873 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:17:49.428321 systemd-logind[1246]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:17:49.429001 systemd-logind[1246]: Removed session 18. Aug 13 01:17:49.487877 sshd[3561]: Accepted publickey for core from 139.178.68.195 port 34894 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:17:49.493812 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:17:49.498197 systemd-logind[1246]: New session 19 of user core. Aug 13 01:17:49.499364 systemd[1]: Started session-19.scope. Aug 13 01:17:50.101491 sshd[3561]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:50.103842 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.68.195:37538.service. Aug 13 01:17:50.106192 systemd[1]: sshd@16-139.178.70.100:22-139.178.68.195:34894.service: Deactivated successfully. Aug 13 01:17:50.106706 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:17:50.108089 systemd-logind[1246]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:17:50.111736 systemd-logind[1246]: Removed session 19. 
Aug 13 01:17:50.139358 sshd[3571]: Accepted publickey for core from 139.178.68.195 port 37538 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 01:17:50.140978 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:17:50.144725 systemd-logind[1246]: New session 20 of user core.
Aug 13 01:17:50.145702 systemd[1]: Started session-20.scope.
Aug 13 01:17:50.275305 sshd[3571]: pam_unix(sshd:session): session closed for user core
Aug 13 01:17:50.276947 systemd-logind[1246]: Session 20 logged out. Waiting for processes to exit.
Aug 13 01:17:50.277020 systemd[1]: sshd@17-139.178.70.100:22-139.178.68.195:37538.service: Deactivated successfully.
Aug 13 01:17:50.277420 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 01:17:50.278126 systemd-logind[1246]: Removed session 20.
Aug 13 01:17:55.280634 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.68.195:37540.service.
Aug 13 01:17:55.314485 sshd[3583]: Accepted publickey for core from 139.178.68.195 port 37540 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 01:17:55.315857 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:17:55.319636 systemd[1]: Started session-21.scope.
Aug 13 01:17:55.320143 systemd-logind[1246]: New session 21 of user core.
Aug 13 01:17:55.433367 sshd[3583]: pam_unix(sshd:session): session closed for user core
Aug 13 01:17:55.435013 systemd[1]: sshd@18-139.178.70.100:22-139.178.68.195:37540.service: Deactivated successfully.
Aug 13 01:17:55.435535 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 01:17:55.436004 systemd-logind[1246]: Session 21 logged out. Waiting for processes to exit.
Aug 13 01:17:55.436519 systemd-logind[1246]: Removed session 21.
Aug 13 01:18:00.437718 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.68.195:56008.service.
Aug 13 01:18:00.469976 sshd[3602]: Accepted publickey for core from 139.178.68.195 port 56008 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 01:18:00.471222 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:18:00.475528 systemd-logind[1246]: New session 22 of user core.
Aug 13 01:18:00.476293 systemd[1]: Started session-22.scope.
Aug 13 01:18:00.581463 sshd[3602]: pam_unix(sshd:session): session closed for user core
Aug 13 01:18:00.584782 systemd-logind[1246]: Session 22 logged out. Waiting for processes to exit.
Aug 13 01:18:00.584948 systemd[1]: sshd@19-139.178.70.100:22-139.178.68.195:56008.service: Deactivated successfully.
Aug 13 01:18:00.585367 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 01:18:00.585972 systemd-logind[1246]: Removed session 22.
Aug 13 01:18:05.585429 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.68.195:56010.service.
Aug 13 01:18:05.669109 sshd[3616]: Accepted publickey for core from 139.178.68.195 port 56010 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 01:18:05.670273 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:18:05.673546 systemd[1]: Started session-23.scope.
Aug 13 01:18:05.673924 systemd-logind[1246]: New session 23 of user core.
Aug 13 01:18:05.804236 sshd[3616]: pam_unix(sshd:session): session closed for user core
Aug 13 01:18:05.805785 systemd[1]: sshd@20-139.178.70.100:22-139.178.68.195:56010.service: Deactivated successfully.
Aug 13 01:18:05.806286 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 01:18:05.806794 systemd-logind[1246]: Session 23 logged out. Waiting for processes to exit.
Aug 13 01:18:05.807230 systemd-logind[1246]: Removed session 23.
Aug 13 01:18:10.809236 systemd[1]: Started sshd@21-139.178.70.100:22-139.178.68.195:60506.service.
Aug 13 01:18:10.838926 sshd[3628]: Accepted publickey for core from 139.178.68.195 port 60506 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 01:18:10.840295 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:18:10.844509 systemd[1]: Started session-24.scope.
Aug 13 01:18:10.844823 systemd-logind[1246]: New session 24 of user core.
Aug 13 01:18:10.938662 sshd[3628]: pam_unix(sshd:session): session closed for user core
Aug 13 01:18:10.941483 systemd[1]: Started sshd@22-139.178.70.100:22-139.178.68.195:60510.service.
Aug 13 01:18:10.946147 systemd[1]: sshd@21-139.178.70.100:22-139.178.68.195:60506.service: Deactivated successfully.
Aug 13 01:18:10.946888 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 01:18:10.947462 systemd-logind[1246]: Session 24 logged out. Waiting for processes to exit.
Aug 13 01:18:10.947965 systemd-logind[1246]: Removed session 24.
Aug 13 01:18:10.972781 sshd[3639]: Accepted publickey for core from 139.178.68.195 port 60510 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 01:18:10.973650 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:18:10.976119 systemd-logind[1246]: New session 25 of user core.
Aug 13 01:18:10.976747 systemd[1]: Started session-25.scope.
Aug 13 01:18:13.211561 env[1257]: time="2025-08-13T01:18:13.211529104Z" level=info msg="StopContainer for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" with timeout 30 (s)"
Aug 13 01:18:13.212428 env[1257]: time="2025-08-13T01:18:13.212290785Z" level=info msg="Stop container \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" with signal terminated"
Aug 13 01:18:13.226861 env[1257]: time="2025-08-13T01:18:13.226815246Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:18:13.228375 systemd[1]: cri-containerd-03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d.scope: Deactivated successfully.
Aug 13 01:18:13.234413 env[1257]: time="2025-08-13T01:18:13.234380749Z" level=info msg="StopContainer for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" with timeout 2 (s)"
Aug 13 01:18:13.234557 env[1257]: time="2025-08-13T01:18:13.234524087Z" level=info msg="Stop container \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" with signal terminated"
Aug 13 01:18:13.244102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d-rootfs.mount: Deactivated successfully.
Aug 13 01:18:13.245295 systemd-networkd[1063]: lxc_health: Link DOWN
Aug 13 01:18:13.245302 systemd-networkd[1063]: lxc_health: Lost carrier
Aug 13 01:18:13.256161 env[1257]: time="2025-08-13T01:18:13.256096082Z" level=info msg="shim disconnected" id=03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d
Aug 13 01:18:13.256296 env[1257]: time="2025-08-13T01:18:13.256159704Z" level=warning msg="cleaning up after shim disconnected" id=03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d namespace=k8s.io
Aug 13 01:18:13.256296 env[1257]: time="2025-08-13T01:18:13.256172184Z" level=info msg="cleaning up dead shim"
Aug 13 01:18:13.269362 env[1257]: time="2025-08-13T01:18:13.269324889Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3693 runtime=io.containerd.runc.v2\n"
Aug 13 01:18:13.270821 env[1257]: time="2025-08-13T01:18:13.270653414Z" level=info msg="StopContainer for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" returns successfully"
Aug 13 01:18:13.271270 env[1257]: time="2025-08-13T01:18:13.271253956Z" level=info msg="StopPodSandbox for \"808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440\""
Aug 13 01:18:13.271325 env[1257]: time="2025-08-13T01:18:13.271308437Z" level=info msg="Container to stop \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:18:13.272627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440-shm.mount: Deactivated successfully.
Aug 13 01:18:13.274475 systemd[1]: cri-containerd-e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1.scope: Deactivated successfully.
Aug 13 01:18:13.274641 systemd[1]: cri-containerd-e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1.scope: Consumed 4.698s CPU time.
Aug 13 01:18:13.284189 systemd[1]: cri-containerd-808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440.scope: Deactivated successfully.
Aug 13 01:18:13.297874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1-rootfs.mount: Deactivated successfully.
Aug 13 01:18:13.302392 env[1257]: time="2025-08-13T01:18:13.302300568Z" level=info msg="shim disconnected" id=e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1
Aug 13 01:18:13.302392 env[1257]: time="2025-08-13T01:18:13.302355667Z" level=warning msg="cleaning up after shim disconnected" id=e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1 namespace=k8s.io
Aug 13 01:18:13.302392 env[1257]: time="2025-08-13T01:18:13.302364328Z" level=info msg="cleaning up dead shim"
Aug 13 01:18:13.312176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440-rootfs.mount: Deactivated successfully.
Aug 13 01:18:13.315981 env[1257]: time="2025-08-13T01:18:13.315911872Z" level=info msg="shim disconnected" id=808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440
Aug 13 01:18:13.315981 env[1257]: time="2025-08-13T01:18:13.315941869Z" level=warning msg="cleaning up after shim disconnected" id=808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440 namespace=k8s.io
Aug 13 01:18:13.315981 env[1257]: time="2025-08-13T01:18:13.315947984Z" level=info msg="cleaning up dead shim"
Aug 13 01:18:13.316499 env[1257]: time="2025-08-13T01:18:13.316479919Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3732 runtime=io.containerd.runc.v2\n"
Aug 13 01:18:13.317112 env[1257]: time="2025-08-13T01:18:13.317089397Z" level=info msg="StopContainer for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" returns successfully"
Aug 13 01:18:13.317443 env[1257]: time="2025-08-13T01:18:13.317428611Z" level=info msg="StopPodSandbox for \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\""
Aug 13 01:18:13.317526 env[1257]: time="2025-08-13T01:18:13.317512766Z" level=info msg="Container to stop \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:18:13.317587 env[1257]: time="2025-08-13T01:18:13.317575464Z" level=info msg="Container to stop \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:18:13.317636 env[1257]: time="2025-08-13T01:18:13.317625302Z" level=info msg="Container to stop \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:18:13.317705 env[1257]: time="2025-08-13T01:18:13.317694808Z" level=info msg="Container to stop \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:18:13.317757 env[1257]: time="2025-08-13T01:18:13.317746555Z" level=info msg="Container to stop \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:18:13.321647 systemd[1]: cri-containerd-25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0.scope: Deactivated successfully.
Aug 13 01:18:13.325014 env[1257]: time="2025-08-13T01:18:13.324994160Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3753 runtime=io.containerd.runc.v2\n"
Aug 13 01:18:13.331830 env[1257]: time="2025-08-13T01:18:13.331804339Z" level=info msg="TearDown network for sandbox \"808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440\" successfully"
Aug 13 01:18:13.331938 env[1257]: time="2025-08-13T01:18:13.331925557Z" level=info msg="StopPodSandbox for \"808de6d6b834695cf56e97511fec076b6f28ee7e27cf01409c09eeb5e0247440\" returns successfully"
Aug 13 01:18:13.357048 env[1257]: time="2025-08-13T01:18:13.357012809Z" level=info msg="shim disconnected" id=25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0
Aug 13 01:18:13.357048 env[1257]: time="2025-08-13T01:18:13.357044348Z" level=warning msg="cleaning up after shim disconnected" id=25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0 namespace=k8s.io
Aug 13 01:18:13.357048 env[1257]: time="2025-08-13T01:18:13.357051199Z" level=info msg="cleaning up dead shim"
Aug 13 01:18:13.362283 env[1257]: time="2025-08-13T01:18:13.362256239Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3785 runtime=io.containerd.runc.v2\n"
Aug 13 01:18:13.363434 env[1257]: time="2025-08-13T01:18:13.363416675Z" level=info msg="TearDown network for sandbox \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" successfully"
Aug 13 01:18:13.363496 env[1257]: time="2025-08-13T01:18:13.363484124Z" level=info msg="StopPodSandbox for \"25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0\" returns successfully"
Aug 13 01:18:13.455914 kubelet[2059]: I0813 01:18:13.455883 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-clustermesh-secrets\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456151 kubelet[2059]: I0813 01:18:13.455932 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hubble-tls\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456151 kubelet[2059]: I0813 01:18:13.455945 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9b3505-7044-4cab-9ec6-bf9b840b2685-cilium-config-path\") pod \"3d9b3505-7044-4cab-9ec6-bf9b840b2685\" (UID: \"3d9b3505-7044-4cab-9ec6-bf9b840b2685\") "
Aug 13 01:18:13.456151 kubelet[2059]: I0813 01:18:13.455958 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-xtables-lock\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456151 kubelet[2059]: I0813 01:18:13.455970 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-kernel\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456151 kubelet[2059]: I0813 01:18:13.455980 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24tkn\" (UniqueName: \"kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-kube-api-access-24tkn\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456151 kubelet[2059]: I0813 01:18:13.455989 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvpkj\" (UniqueName: \"kubernetes.io/projected/3d9b3505-7044-4cab-9ec6-bf9b840b2685-kube-api-access-hvpkj\") pod \"3d9b3505-7044-4cab-9ec6-bf9b840b2685\" (UID: \"3d9b3505-7044-4cab-9ec6-bf9b840b2685\") "
Aug 13 01:18:13.456288 kubelet[2059]: I0813 01:18:13.455997 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-bpf-maps\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456288 kubelet[2059]: I0813 01:18:13.456004 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-net\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456288 kubelet[2059]: I0813 01:18:13.456013 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-etc-cni-netd\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456288 kubelet[2059]: I0813 01:18:13.456020 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hostproc\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456288 kubelet[2059]: I0813 01:18:13.456029 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-cgroup\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456288 kubelet[2059]: I0813 01:18:13.456040 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-config-path\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456409 kubelet[2059]: I0813 01:18:13.456048 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-lib-modules\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456409 kubelet[2059]: I0813 01:18:13.456061 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-run\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.456409 kubelet[2059]: I0813 01:18:13.456072 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cni-path\") pod \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\" (UID: \"d19dd2b0-5d8c-44b2-82ac-9c6c490607f6\") "
Aug 13 01:18:13.472436 kubelet[2059]: I0813 01:18:13.469695 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.472527 kubelet[2059]: I0813 01:18:13.469679 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.472843 kubelet[2059]: I0813 01:18:13.472600 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474006 kubelet[2059]: I0813 01:18:13.473994 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:18:13.474087 kubelet[2059]: I0813 01:18:13.474077 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474237 kubelet[2059]: I0813 01:18:13.474228 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474444 kubelet[2059]: I0813 01:18:13.474426 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474513 kubelet[2059]: I0813 01:18:13.474446 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474538 kubelet[2059]: I0813 01:18:13.474513 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474538 kubelet[2059]: I0813 01:18:13.474525 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.474538 kubelet[2059]: I0813 01:18:13.474535 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:18:13.475318 kubelet[2059]: I0813 01:18:13.475306 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:18:13.476128 kubelet[2059]: I0813 01:18:13.476117 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-kube-api-access-24tkn" (OuterVolumeSpecName: "kube-api-access-24tkn") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "kube-api-access-24tkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:18:13.476257 kubelet[2059]: I0813 01:18:13.476240 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d9b3505-7044-4cab-9ec6-bf9b840b2685-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d9b3505-7044-4cab-9ec6-bf9b840b2685" (UID: "3d9b3505-7044-4cab-9ec6-bf9b840b2685"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:18:13.477180 kubelet[2059]: I0813 01:18:13.477165 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" (UID: "d19dd2b0-5d8c-44b2-82ac-9c6c490607f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:18:13.478241 kubelet[2059]: I0813 01:18:13.478225 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9b3505-7044-4cab-9ec6-bf9b840b2685-kube-api-access-hvpkj" (OuterVolumeSpecName: "kube-api-access-hvpkj") pod "3d9b3505-7044-4cab-9ec6-bf9b840b2685" (UID: "3d9b3505-7044-4cab-9ec6-bf9b840b2685"). InnerVolumeSpecName "kube-api-access-hvpkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:18:13.556232 kubelet[2059]: I0813 01:18:13.556192 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556232 kubelet[2059]: I0813 01:18:13.556221 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556232 kubelet[2059]: I0813 01:18:13.556234 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556242 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556249 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556255 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556261 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556269 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9b3505-7044-4cab-9ec6-bf9b840b2685-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556275 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556281 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556434 kubelet[2059]: I0813 01:18:13.556286 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556639 kubelet[2059]: I0813 01:18:13.556293 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556639 kubelet[2059]: I0813 01:18:13.556299 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24tkn\" (UniqueName: \"kubernetes.io/projected/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-kube-api-access-24tkn\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556639 kubelet[2059]: I0813 01:18:13.556306 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvpkj\" (UniqueName: \"kubernetes.io/projected/3d9b3505-7044-4cab-9ec6-bf9b840b2685-kube-api-access-hvpkj\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556639 kubelet[2059]: I0813 01:18:13.556311 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.556639 kubelet[2059]: I0813 01:18:13.556317 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 01:18:13.658635 kubelet[2059]: I0813 01:18:13.658593 2059 scope.go:117] "RemoveContainer" containerID="e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1"
Aug 13 01:18:13.663196 systemd[1]: Removed slice kubepods-besteffort-pod3d9b3505_7044_4cab_9ec6_bf9b840b2685.slice.
Aug 13 01:18:13.665251 systemd[1]: Removed slice kubepods-burstable-podd19dd2b0_5d8c_44b2_82ac_9c6c490607f6.slice.
Aug 13 01:18:13.665323 systemd[1]: kubepods-burstable-podd19dd2b0_5d8c_44b2_82ac_9c6c490607f6.slice: Consumed 4.762s CPU time.
Aug 13 01:18:13.675565 env[1257]: time="2025-08-13T01:18:13.675524909Z" level=info msg="RemoveContainer for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\""
Aug 13 01:18:13.679131 env[1257]: time="2025-08-13T01:18:13.679009788Z" level=info msg="RemoveContainer for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" returns successfully"
Aug 13 01:18:13.679342 kubelet[2059]: I0813 01:18:13.679328 2059 scope.go:117] "RemoveContainer" containerID="68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344"
Aug 13 01:18:13.680570 env[1257]: time="2025-08-13T01:18:13.680545071Z" level=info msg="RemoveContainer for \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\""
Aug 13 01:18:13.683815 env[1257]: time="2025-08-13T01:18:13.683207339Z" level=info msg="RemoveContainer for \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\" returns successfully"
Aug 13 01:18:13.684108 kubelet[2059]: I0813 01:18:13.684093 2059 scope.go:117] "RemoveContainer" containerID="aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032"
Aug 13 01:18:13.685572 env[1257]: time="2025-08-13T01:18:13.685377729Z" level=info msg="RemoveContainer for \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\""
Aug 13 01:18:13.687749 env[1257]: time="2025-08-13T01:18:13.687719397Z" level=info msg="RemoveContainer for \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\" returns successfully"
Aug 13 01:18:13.688090 kubelet[2059]: I0813 01:18:13.688079 2059 scope.go:117] "RemoveContainer" containerID="919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3"
Aug 13 01:18:13.689685 env[1257]: time="2025-08-13T01:18:13.689478227Z" level=info msg="RemoveContainer for \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\""
Aug 13 01:18:13.692812 env[1257]: time="2025-08-13T01:18:13.692786093Z" level=info msg="RemoveContainer for \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\" returns successfully"
Aug 13 01:18:13.692916 kubelet[2059]: I0813 01:18:13.692902 2059 scope.go:117] "RemoveContainer" containerID="2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96"
Aug 13 01:18:13.694009 env[1257]: time="2025-08-13T01:18:13.693681951Z" level=info msg="RemoveContainer for \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\""
Aug 13 01:18:13.694957 env[1257]: time="2025-08-13T01:18:13.694919438Z" level=info msg="RemoveContainer for \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\" returns successfully"
Aug 13 01:18:13.695052 kubelet[2059]: I0813 01:18:13.695044 2059 scope.go:117] "RemoveContainer" containerID="e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1"
Aug 13 01:18:13.695277 env[1257]: time="2025-08-13T01:18:13.695195209Z" level=error msg="ContainerStatus for \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find
container \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\": not found" Aug 13 01:18:13.696390 kubelet[2059]: E0813 01:18:13.696378 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\": not found" containerID="e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1" Aug 13 01:18:13.697530 kubelet[2059]: I0813 01:18:13.696464 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1"} err="failed to get container status \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7a76a600c324ac88d210cbdb7c99a1e7dea9f7f7f544ab042725e947d3901a1\": not found" Aug 13 01:18:13.697592 kubelet[2059]: I0813 01:18:13.697584 2059 scope.go:117] "RemoveContainer" containerID="68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344" Aug 13 01:18:13.697775 env[1257]: time="2025-08-13T01:18:13.697729551Z" level=error msg="ContainerStatus for \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\": not found" Aug 13 01:18:13.697864 kubelet[2059]: E0813 01:18:13.697852 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\": not found" containerID="68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344" Aug 13 01:18:13.697937 kubelet[2059]: I0813 01:18:13.697924 2059 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344"} err="failed to get container status \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\": rpc error: code = NotFound desc = an error occurred when try to find container \"68de88d6913e2e96f18336627bb3a5ba41e6ea18cfa33d15f5a72bf0c8240344\": not found" Aug 13 01:18:13.698005 kubelet[2059]: I0813 01:18:13.697994 2059 scope.go:117] "RemoveContainer" containerID="aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032" Aug 13 01:18:13.698187 env[1257]: time="2025-08-13T01:18:13.698138225Z" level=error msg="ContainerStatus for \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\": not found" Aug 13 01:18:13.698417 kubelet[2059]: E0813 01:18:13.698352 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\": not found" containerID="aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032" Aug 13 01:18:13.698417 kubelet[2059]: I0813 01:18:13.698364 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032"} err="failed to get container status \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\": rpc error: code = NotFound desc = an error occurred when try to find container \"aeafbc6c09b5089161642adcf72eb1993f76be3250d8a1c4c81b447314963032\": not found" Aug 13 01:18:13.698483 kubelet[2059]: I0813 01:18:13.698379 2059 scope.go:117] "RemoveContainer" containerID="919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3" Aug 13 01:18:13.698686 env[1257]: 
time="2025-08-13T01:18:13.698619398Z" level=error msg="ContainerStatus for \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\": not found" Aug 13 01:18:13.698775 kubelet[2059]: E0813 01:18:13.698766 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\": not found" containerID="919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3" Aug 13 01:18:13.698844 kubelet[2059]: I0813 01:18:13.698833 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3"} err="failed to get container status \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"919ef2a5ae9b65a4aece9a2c26ed916e2031a5e477606f1e7ce4fe05d41aacd3\": not found" Aug 13 01:18:13.698890 kubelet[2059]: I0813 01:18:13.698882 2059 scope.go:117] "RemoveContainer" containerID="2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96" Aug 13 01:18:13.699090 env[1257]: time="2025-08-13T01:18:13.699043880Z" level=error msg="ContainerStatus for \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\": not found" Aug 13 01:18:13.699169 kubelet[2059]: E0813 01:18:13.699160 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\": not found" 
containerID="2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96" Aug 13 01:18:13.699245 kubelet[2059]: I0813 01:18:13.699233 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96"} err="failed to get container status \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f43818ac9614f947c0c5016f1e9cc7ef42dc5cdc4cf15b7b351863851ec6e96\": not found" Aug 13 01:18:13.699300 kubelet[2059]: I0813 01:18:13.699286 2059 scope.go:117] "RemoveContainer" containerID="03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d" Aug 13 01:18:13.700109 env[1257]: time="2025-08-13T01:18:13.699900288Z" level=info msg="RemoveContainer for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\"" Aug 13 01:18:13.701015 env[1257]: time="2025-08-13T01:18:13.700977713Z" level=info msg="RemoveContainer for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" returns successfully" Aug 13 01:18:13.701100 kubelet[2059]: I0813 01:18:13.701092 2059 scope.go:117] "RemoveContainer" containerID="03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d" Aug 13 01:18:13.701286 env[1257]: time="2025-08-13T01:18:13.701239277Z" level=error msg="ContainerStatus for \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\": not found" Aug 13 01:18:13.701369 kubelet[2059]: E0813 01:18:13.701360 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\": not found" 
containerID="03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d" Aug 13 01:18:13.701457 kubelet[2059]: I0813 01:18:13.701446 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d"} err="failed to get container status \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\": rpc error: code = NotFound desc = an error occurred when try to find container \"03606f2b62bc4ce176a18d9a67b5eacc6c92b952828bbf7f076a96e9e337e83d\": not found" Aug 13 01:18:14.197041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0-rootfs.mount: Deactivated successfully. Aug 13 01:18:14.197121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25d7eb61c853bae4247ef203845022a1d28faf015b7099f014dbebc783e4cbe0-shm.mount: Deactivated successfully. Aug 13 01:18:14.197179 systemd[1]: var-lib-kubelet-pods-d19dd2b0\x2d5d8c\x2d44b2\x2d82ac\x2d9c6c490607f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24tkn.mount: Deactivated successfully. Aug 13 01:18:14.197230 systemd[1]: var-lib-kubelet-pods-d19dd2b0\x2d5d8c\x2d44b2\x2d82ac\x2d9c6c490607f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:18:14.197277 systemd[1]: var-lib-kubelet-pods-d19dd2b0\x2d5d8c\x2d44b2\x2d82ac\x2d9c6c490607f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:18:14.197322 systemd[1]: var-lib-kubelet-pods-3d9b3505\x2d7044\x2d4cab\x2d9ec6\x2dbf9b840b2685-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhvpkj.mount: Deactivated successfully. 
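The systemd mount-unit names in the cleanup entries above encode the kubelet volume paths: "/" becomes "-", while a literal "-" and "~" are escaped as `\x2d` and `\x7e`. A minimal sketch of reversing that encoding for one of the units above (the only subtlety is ordering: slashes must be restored before the `\x2d` escapes are turned back into hyphens):

```shell
# Unit name copied verbatim from one of the "Deactivated successfully" entries above
unit='var-lib-kubelet-pods-d19dd2b0\x2d5d8c\x2d44b2\x2d82ac\x2d9c6c490607f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount'

# 1) strip the .mount suffix, 2) turn "-" back into "/",
# 3) unescape \x2d -> "-" and \x7e -> "~"
path="/$(printf '%s' "${unit%.mount}" \
  | sed -e 's/-/\//g' -e 's/\\x2d/-/g' -e 's/\\x7e/~/g')"
echo "$path"
```

systemd itself ships `systemd-escape --unescape --path` for the same job; the sed pipeline is just a dependency-free approximation of it.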
Aug 13 01:18:14.355660 kubelet[2059]: E0813 01:18:14.355620 2059 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:18:14.974338 systemd[1]: Started sshd@23-139.178.70.100:22-139.178.68.195:60512.service. Aug 13 01:18:14.976546 sshd[3639]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:14.980793 systemd[1]: sshd@22-139.178.70.100:22-139.178.68.195:60510.service: Deactivated successfully. Aug 13 01:18:14.981280 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:18:14.982151 systemd-logind[1246]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:18:14.982931 systemd-logind[1246]: Removed session 25. Aug 13 01:18:15.050944 sshd[3804]: Accepted publickey for core from 139.178.68.195 port 60512 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:18:15.052223 sshd[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:15.055591 systemd-logind[1246]: New session 26 of user core. Aug 13 01:18:15.055987 systemd[1]: Started session-26.scope. Aug 13 01:18:15.334025 kubelet[2059]: I0813 01:18:15.333966 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9b3505-7044-4cab-9ec6-bf9b840b2685" path="/var/lib/kubelet/pods/3d9b3505-7044-4cab-9ec6-bf9b840b2685/volumes" Aug 13 01:18:15.335154 kubelet[2059]: I0813 01:18:15.335140 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" path="/var/lib/kubelet/pods/d19dd2b0-5d8c-44b2-82ac-9c6c490607f6/volumes" Aug 13 01:18:16.152296 sshd[3804]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:16.155293 systemd[1]: Started sshd@24-139.178.70.100:22-139.178.68.195:60518.service. Aug 13 01:18:16.156834 systemd[1]: sshd@23-139.178.70.100:22-139.178.68.195:60512.service: Deactivated successfully. 
Aug 13 01:18:16.157269 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:18:16.157900 systemd-logind[1246]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:18:16.158440 systemd-logind[1246]: Removed session 26. Aug 13 01:18:16.184049 sshd[3815]: Accepted publickey for core from 139.178.68.195 port 60518 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:18:16.185141 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:16.188184 systemd[1]: Started session-27.scope. Aug 13 01:18:16.188697 systemd-logind[1246]: New session 27 of user core. Aug 13 01:18:16.446931 kubelet[2059]: E0813 01:18:16.446885 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" containerName="cilium-agent" Aug 13 01:18:16.446931 kubelet[2059]: E0813 01:18:16.446919 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" containerName="mount-cgroup" Aug 13 01:18:16.446931 kubelet[2059]: E0813 01:18:16.446923 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" containerName="clean-cilium-state" Aug 13 01:18:16.446931 kubelet[2059]: E0813 01:18:16.446928 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d9b3505-7044-4cab-9ec6-bf9b840b2685" containerName="cilium-operator" Aug 13 01:18:16.446931 kubelet[2059]: E0813 01:18:16.446935 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" containerName="apply-sysctl-overwrites" Aug 13 01:18:16.447309 kubelet[2059]: E0813 01:18:16.446941 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" containerName="mount-bpf-fs" Aug 13 01:18:16.473825 kubelet[2059]: I0813 01:18:16.473782 2059 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d9b3505-7044-4cab-9ec6-bf9b840b2685" containerName="cilium-operator" Aug 13 01:18:16.473825 kubelet[2059]: I0813 01:18:16.473808 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19dd2b0-5d8c-44b2-82ac-9c6c490607f6" containerName="cilium-agent" Aug 13 01:18:16.574820 kubelet[2059]: I0813 01:18:16.574799 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-cgroup\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.574941 kubelet[2059]: I0813 01:18:16.574929 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-lib-modules\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.574997 kubelet[2059]: I0813 01:18:16.574988 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-xtables-lock\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575063 kubelet[2059]: I0813 01:18:16.575053 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-bpf-maps\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575120 kubelet[2059]: I0813 01:18:16.575111 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-config-path\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575181 kubelet[2059]: I0813 01:18:16.575167 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hubble-tls\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575234 kubelet[2059]: I0813 01:18:16.575225 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cni-path\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575293 kubelet[2059]: I0813 01:18:16.575284 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-ipsec-secrets\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575347 kubelet[2059]: I0813 01:18:16.575336 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq8lm\" (UniqueName: \"kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-kube-api-access-pq8lm\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575405 kubelet[2059]: I0813 01:18:16.575396 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-net\") pod \"cilium-pbddw\" (UID: 
\"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575460 kubelet[2059]: I0813 01:18:16.575451 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-run\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575515 kubelet[2059]: I0813 01:18:16.575507 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hostproc\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575568 kubelet[2059]: I0813 01:18:16.575559 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-clustermesh-secrets\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575625 kubelet[2059]: I0813 01:18:16.575616 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-kernel\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.575690 kubelet[2059]: I0813 01:18:16.575682 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-etc-cni-netd\") pod \"cilium-pbddw\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " pod="kube-system/cilium-pbddw" Aug 13 01:18:16.953982 systemd[1]: Created slice 
kubepods-burstable-pod552c0443_1ed2_449e_bf8a_2e0d76eb818e.slice. Aug 13 01:18:17.265028 env[1257]: time="2025-08-13T01:18:17.264615523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbddw,Uid:552c0443-1ed2-449e-bf8a-2e0d76eb818e,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:17.364988 env[1257]: time="2025-08-13T01:18:17.364943810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:17.365106 env[1257]: time="2025-08-13T01:18:17.365091695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:17.365205 env[1257]: time="2025-08-13T01:18:17.365191125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:17.365381 env[1257]: time="2025-08-13T01:18:17.365356244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82 pid=3838 runtime=io.containerd.runc.v2 Aug 13 01:18:17.376716 systemd[1]: Started cri-containerd-71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82.scope. 
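Before RunPodSandbox returns, the new sandbox ID first surfaces in the `path=` field of the shim's "starting signal loop" entry above. A small sketch (the path below is copied from that entry) of peeling the ID out of such a field; the same ID then reappears in the "returns sandbox id" message and in the `cri-containerd-….scope` unit name:

```shell
# "starting signal loop" task path, copied from the entry above
task_path='/run/containerd/io.containerd.runtime.v2.task/k8s.io/71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82'

# the last path component is the pod sandbox ID
sandbox_id=$(basename "$task_path")
echo "$sandbox_id"
```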
Aug 13 01:18:17.393407 env[1257]: time="2025-08-13T01:18:17.393385212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbddw,Uid:552c0443-1ed2-449e-bf8a-2e0d76eb818e,Namespace:kube-system,Attempt:0,} returns sandbox id \"71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82\"" Aug 13 01:18:17.405614 env[1257]: time="2025-08-13T01:18:17.405588746Z" level=info msg="CreateContainer within sandbox \"71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:18:17.535201 sshd[3815]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:17.537032 systemd[1]: Started sshd@25-139.178.70.100:22-139.178.68.195:60524.service. Aug 13 01:18:17.538413 systemd[1]: sshd@24-139.178.70.100:22-139.178.68.195:60518.service: Deactivated successfully. Aug 13 01:18:17.538841 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:18:17.539622 systemd-logind[1246]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:18:17.540360 systemd-logind[1246]: Removed session 27. Aug 13 01:18:17.543571 env[1257]: time="2025-08-13T01:18:17.543546140Z" level=info msg="CreateContainer within sandbox \"71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\"" Aug 13 01:18:17.546987 env[1257]: time="2025-08-13T01:18:17.546177873Z" level=info msg="StartContainer for \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\"" Aug 13 01:18:17.558168 systemd[1]: Started cri-containerd-231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a.scope. Aug 13 01:18:17.567679 systemd[1]: cri-containerd-231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a.scope: Deactivated successfully. 
Aug 13 01:18:17.567852 systemd[1]: Stopped cri-containerd-231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a.scope. Aug 13 01:18:17.569392 sshd[3874]: Accepted publickey for core from 139.178.68.195 port 60524 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 01:18:17.570385 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:17.575867 systemd[1]: Started session-28.scope. Aug 13 01:18:17.576912 systemd-logind[1246]: New session 28 of user core. Aug 13 01:18:17.729534 env[1257]: time="2025-08-13T01:18:17.729490590Z" level=info msg="shim disconnected" id=231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a Aug 13 01:18:17.729534 env[1257]: time="2025-08-13T01:18:17.729526676Z" level=warning msg="cleaning up after shim disconnected" id=231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a namespace=k8s.io Aug 13 01:18:17.729534 env[1257]: time="2025-08-13T01:18:17.729534126Z" level=info msg="cleaning up dead shim" Aug 13 01:18:17.740571 env[1257]: time="2025-08-13T01:18:17.740531818Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3909 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T01:18:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2025-08-13T01:18:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 01:18:17.741050 env[1257]: time="2025-08-13T01:18:17.740945775Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed" Aug 13 01:18:17.742596 env[1257]: time="2025-08-13T01:18:17.742560682Z" level=error msg="Failed to pipe stdout of 
container \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\"" error="reading from a closed fifo" Aug 13 01:18:17.742692 env[1257]: time="2025-08-13T01:18:17.742627948Z" level=error msg="Failed to pipe stderr of container \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\"" error="reading from a closed fifo" Aug 13 01:18:17.743287 env[1257]: time="2025-08-13T01:18:17.743233739Z" level=error msg="StartContainer for \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 01:18:17.743517 kubelet[2059]: E0813 01:18:17.743487 2059 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a" Aug 13 01:18:17.746918 kubelet[2059]: E0813 01:18:17.746882 2059 kuberuntime_manager.go:1274] "Unhandled Error" err=< Aug 13 01:18:17.746918 kubelet[2059]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 01:18:17.746918 kubelet[2059]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 01:18:17.746918 kubelet[2059]: rm /hostbin/cilium-mount Aug 13 01:18:17.747082 kubelet[2059]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pbddw_kube-system(552c0443-1ed2-449e-bf8a-2e0d76eb818e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 01:18:17.747082 kubelet[2059]: > logger="UnhandledError" Aug 13 01:18:17.749207 kubelet[2059]: E0813 01:18:17.749161 2059 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pbddw" podUID="552c0443-1ed2-449e-bf8a-2e0d76eb818e" Aug 13 01:18:18.004922 env[1257]: time="2025-08-13T01:18:18.004863981Z" level=info msg="StopPodSandbox for \"71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82\"" Aug 13 01:18:18.005133 env[1257]: time="2025-08-13T01:18:18.005113778Z" level=info msg="Container to stop \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:18:18.006755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82-shm.mount: Deactivated successfully. Aug 13 01:18:18.012484 systemd[1]: cri-containerd-71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82.scope: Deactivated successfully. Aug 13 01:18:18.025954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82-rootfs.mount: Deactivated successfully. 
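The failed-start entries above all funnel into a single kubelet "Error syncing pod, skipping" line whose `pod=` field names the affected pod. A hedged one-liner for pulling that field out of journal output; the sample line is abbreviated from the log above (real lines carry the full escaped runc error string), and the `grep -o` pattern assumes the klog key=value quoting shown in this transcript:

```shell
# abbreviated copy of the "Error syncing pod" entry above
line='kubelet[2059]: E0813 01:18:17.749161 2059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\"" pod="kube-system/cilium-pbddw" podUID="552c0443-1ed2-449e-bf8a-2e0d76eb818e"'

# match the pod="..." field and keep only the quoted value
# (pod=" does not match inside podUID=", so exactly one hit is expected)
pod=$(printf '%s\n' "$line" | grep -o 'pod="[^"]*"' | cut -d'"' -f2)
echo "$pod"
```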
Aug 13 01:18:18.135478 env[1257]: time="2025-08-13T01:18:18.135444064Z" level=info msg="shim disconnected" id=71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82 Aug 13 01:18:18.135478 env[1257]: time="2025-08-13T01:18:18.135475133Z" level=warning msg="cleaning up after shim disconnected" id=71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82 namespace=k8s.io Aug 13 01:18:18.135478 env[1257]: time="2025-08-13T01:18:18.135483618Z" level=info msg="cleaning up dead shim" Aug 13 01:18:18.140317 env[1257]: time="2025-08-13T01:18:18.140290148Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\n" Aug 13 01:18:18.140605 env[1257]: time="2025-08-13T01:18:18.140588901Z" level=info msg="TearDown network for sandbox \"71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82\" successfully" Aug 13 01:18:18.140683 env[1257]: time="2025-08-13T01:18:18.140656253Z" level=info msg="StopPodSandbox for \"71bc244bcbdfcb962c5faa12709e7ef334ed1f06634523bfa68bed54e9701c82\" returns successfully" Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148369 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq8lm\" (UniqueName: \"kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-kube-api-access-pq8lm\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148397 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hubble-tls\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148406 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cni-path\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148415 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-cgroup\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148424 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-xtables-lock\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148432 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-bpf-maps\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148439 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-run\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148468 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-clustermesh-secrets\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148480 2059 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-config-path\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148488 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-net\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148496 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-etc-cni-netd\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148509 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-kernel\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148520 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-ipsec-secrets\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148528 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-lib-modules\") pod 
\"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148537 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hostproc\") pod \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\" (UID: \"552c0443-1ed2-449e-bf8a-2e0d76eb818e\") " Aug 13 01:18:18.151424 kubelet[2059]: I0813 01:18:18.148576 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hostproc" (OuterVolumeSpecName: "hostproc") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.151417 systemd[1]: var-lib-kubelet-pods-552c0443\x2d1ed2\x2d449e\x2dbf8a\x2d2e0d76eb818e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpq8lm.mount: Deactivated successfully. Aug 13 01:18:18.153101 kubelet[2059]: I0813 01:18:18.153073 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-kube-api-access-pq8lm" (OuterVolumeSpecName: "kube-api-access-pq8lm") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "kube-api-access-pq8lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:18:18.154300 kubelet[2059]: I0813 01:18:18.154285 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:18:18.154381 kubelet[2059]: I0813 01:18:18.154371 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.154440 kubelet[2059]: I0813 01:18:18.154430 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.154496 kubelet[2059]: I0813 01:18:18.154487 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.160403 kubelet[2059]: I0813 01:18:18.158688 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:18:18.160403 kubelet[2059]: I0813 01:18:18.160105 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:18:18.160403 kubelet[2059]: I0813 01:18:18.160127 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.156641 systemd[1]: var-lib-kubelet-pods-552c0443\x2d1ed2\x2d449e\x2dbf8a\x2d2e0d76eb818e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:18:18.161212 kubelet[2059]: I0813 01:18:18.161197 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.161290 kubelet[2059]: I0813 01:18:18.161280 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cni-path" (OuterVolumeSpecName: "cni-path") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.161421 kubelet[2059]: I0813 01:18:18.161377 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:18:18.161473 kubelet[2059]: I0813 01:18:18.161388 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.161522 kubelet[2059]: I0813 01:18:18.161395 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.161584 kubelet[2059]: I0813 01:18:18.161403 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "552c0443-1ed2-449e-bf8a-2e0d76eb818e" (UID: "552c0443-1ed2-449e-bf8a-2e0d76eb818e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:18:18.248808 kubelet[2059]: I0813 01:18:18.248780 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.248946 kubelet[2059]: I0813 01:18:18.248937 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.248999 kubelet[2059]: I0813 01:18:18.248990 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249049 kubelet[2059]: I0813 01:18:18.249042 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249105 kubelet[2059]: I0813 01:18:18.249097 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249220 kubelet[2059]: I0813 01:18:18.249212 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249267 kubelet[2059]: I0813 01:18:18.249259 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 
01:18:18.249315 kubelet[2059]: I0813 01:18:18.249308 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249369 kubelet[2059]: I0813 01:18:18.249360 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249423 kubelet[2059]: I0813 01:18:18.249415 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249472 kubelet[2059]: I0813 01:18:18.249464 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq8lm\" (UniqueName: \"kubernetes.io/projected/552c0443-1ed2-449e-bf8a-2e0d76eb818e-kube-api-access-pq8lm\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249522 kubelet[2059]: I0813 01:18:18.249514 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249567 kubelet[2059]: I0813 01:18:18.249559 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249614 kubelet[2059]: I0813 01:18:18.249607 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.249662 kubelet[2059]: I0813 01:18:18.249655 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/552c0443-1ed2-449e-bf8a-2e0d76eb818e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 01:18:18.961731 systemd[1]: var-lib-kubelet-pods-552c0443\x2d1ed2\x2d449e\x2dbf8a\x2d2e0d76eb818e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:18:18.961833 systemd[1]: var-lib-kubelet-pods-552c0443\x2d1ed2\x2d449e\x2dbf8a\x2d2e0d76eb818e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 01:18:19.007221 kubelet[2059]: I0813 01:18:19.007203 2059 scope.go:117] "RemoveContainer" containerID="231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a" Aug 13 01:18:19.009060 env[1257]: time="2025-08-13T01:18:19.009027710Z" level=info msg="RemoveContainer for \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\"" Aug 13 01:18:19.010280 systemd[1]: Removed slice kubepods-burstable-pod552c0443_1ed2_449e_bf8a_2e0d76eb818e.slice. Aug 13 01:18:19.018739 env[1257]: time="2025-08-13T01:18:19.018679409Z" level=info msg="RemoveContainer for \"231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a\" returns successfully" Aug 13 01:18:19.070275 kubelet[2059]: E0813 01:18:19.070252 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="552c0443-1ed2-449e-bf8a-2e0d76eb818e" containerName="mount-cgroup" Aug 13 01:18:19.070395 kubelet[2059]: I0813 01:18:19.070280 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="552c0443-1ed2-449e-bf8a-2e0d76eb818e" containerName="mount-cgroup" Aug 13 01:18:19.076533 systemd[1]: Created slice kubepods-burstable-pod03b1be7f_8dbc_426c_8bdd_22349049853a.slice. 
Aug 13 01:18:19.255184 kubelet[2059]: I0813 01:18:19.255114 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-cilium-run\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255309 kubelet[2059]: I0813 01:18:19.255298 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-cni-path\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255384 kubelet[2059]: I0813 01:18:19.255374 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-xtables-lock\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255449 kubelet[2059]: I0813 01:18:19.255439 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03b1be7f-8dbc-426c-8bdd-22349049853a-cilium-config-path\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255508 kubelet[2059]: I0813 01:18:19.255498 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-cilium-cgroup\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255576 kubelet[2059]: I0813 01:18:19.255566 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-host-proc-sys-kernel\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255643 kubelet[2059]: I0813 01:18:19.255627 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2gff\" (UniqueName: \"kubernetes.io/projected/03b1be7f-8dbc-426c-8bdd-22349049853a-kube-api-access-m2gff\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255733 kubelet[2059]: I0813 01:18:19.255723 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-hostproc\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255794 kubelet[2059]: I0813 01:18:19.255784 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-etc-cni-netd\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255851 kubelet[2059]: I0813 01:18:19.255842 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-host-proc-sys-net\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255918 kubelet[2059]: I0813 01:18:19.255907 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/03b1be7f-8dbc-426c-8bdd-22349049853a-cilium-ipsec-secrets\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.255978 kubelet[2059]: I0813 01:18:19.255968 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-bpf-maps\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.256066 kubelet[2059]: I0813 01:18:19.256057 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03b1be7f-8dbc-426c-8bdd-22349049853a-clustermesh-secrets\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.256124 kubelet[2059]: I0813 01:18:19.256115 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03b1be7f-8dbc-426c-8bdd-22349049853a-lib-modules\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.256181 kubelet[2059]: I0813 01:18:19.256171 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03b1be7f-8dbc-426c-8bdd-22349049853a-hubble-tls\") pod \"cilium-w5qwr\" (UID: \"03b1be7f-8dbc-426c-8bdd-22349049853a\") " pod="kube-system/cilium-w5qwr" Aug 13 01:18:19.333640 kubelet[2059]: I0813 01:18:19.333621 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552c0443-1ed2-449e-bf8a-2e0d76eb818e" path="/var/lib/kubelet/pods/552c0443-1ed2-449e-bf8a-2e0d76eb818e/volumes" Aug 13 01:18:19.364199 kubelet[2059]: E0813 01:18:19.364178 2059 kubelet.go:2902] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:18:19.679749 env[1257]: time="2025-08-13T01:18:19.679714282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5qwr,Uid:03b1be7f-8dbc-426c-8bdd-22349049853a,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:19.687799 env[1257]: time="2025-08-13T01:18:19.687660591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:19.687799 env[1257]: time="2025-08-13T01:18:19.687704302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:19.687799 env[1257]: time="2025-08-13T01:18:19.687711596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:19.687942 env[1257]: time="2025-08-13T01:18:19.687831805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927 pid=3968 runtime=io.containerd.runc.v2 Aug 13 01:18:19.695055 systemd[1]: Started cri-containerd-78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927.scope. 
Aug 13 01:18:19.721588 env[1257]: time="2025-08-13T01:18:19.721562700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5qwr,Uid:03b1be7f-8dbc-426c-8bdd-22349049853a,Namespace:kube-system,Attempt:0,} returns sandbox id \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\"" Aug 13 01:18:19.724585 env[1257]: time="2025-08-13T01:18:19.723079910Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:18:19.787363 env[1257]: time="2025-08-13T01:18:19.787324438Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149\"" Aug 13 01:18:19.787848 env[1257]: time="2025-08-13T01:18:19.787836005Z" level=info msg="StartContainer for \"098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149\"" Aug 13 01:18:19.799720 systemd[1]: Started cri-containerd-098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149.scope. Aug 13 01:18:19.821500 env[1257]: time="2025-08-13T01:18:19.821475610Z" level=info msg="StartContainer for \"098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149\" returns successfully" Aug 13 01:18:19.840873 systemd[1]: cri-containerd-098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149.scope: Deactivated successfully. 
Aug 13 01:18:19.857782 env[1257]: time="2025-08-13T01:18:19.857754479Z" level=info msg="shim disconnected" id=098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149 Aug 13 01:18:19.857928 env[1257]: time="2025-08-13T01:18:19.857916574Z" level=warning msg="cleaning up after shim disconnected" id=098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149 namespace=k8s.io Aug 13 01:18:19.857983 env[1257]: time="2025-08-13T01:18:19.857967147Z" level=info msg="cleaning up dead shim" Aug 13 01:18:19.862716 env[1257]: time="2025-08-13T01:18:19.862691902Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4050 runtime=io.containerd.runc.v2\n" Aug 13 01:18:20.013800 env[1257]: time="2025-08-13T01:18:20.012529849Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:18:20.048723 env[1257]: time="2025-08-13T01:18:20.048609353Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f\"" Aug 13 01:18:20.049891 env[1257]: time="2025-08-13T01:18:20.049858420Z" level=info msg="StartContainer for \"4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f\"" Aug 13 01:18:20.066718 systemd[1]: Started cri-containerd-4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f.scope. Aug 13 01:18:20.087054 env[1257]: time="2025-08-13T01:18:20.087012122Z" level=info msg="StartContainer for \"4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f\" returns successfully" Aug 13 01:18:20.100831 systemd[1]: cri-containerd-4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f.scope: Deactivated successfully. 
Aug 13 01:18:20.143561 env[1257]: time="2025-08-13T01:18:20.143525884Z" level=info msg="shim disconnected" id=4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f Aug 13 01:18:20.143771 env[1257]: time="2025-08-13T01:18:20.143760073Z" level=warning msg="cleaning up after shim disconnected" id=4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f namespace=k8s.io Aug 13 01:18:20.143840 env[1257]: time="2025-08-13T01:18:20.143830663Z" level=info msg="cleaning up dead shim" Aug 13 01:18:20.148281 env[1257]: time="2025-08-13T01:18:20.148250732Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4116 runtime=io.containerd.runc.v2\n" Aug 13 01:18:20.845191 kubelet[2059]: W0813 01:18:20.845152 2059 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552c0443_1ed2_449e_bf8a_2e0d76eb818e.slice/cri-containerd-231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a.scope WatchSource:0}: container "231a2e48e47605d0570a4c157f67b1a3f341cef692ae61bc7e118051feeedc0a" in namespace "k8s.io": not found Aug 13 01:18:20.961816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f-rootfs.mount: Deactivated successfully. Aug 13 01:18:21.016980 env[1257]: time="2025-08-13T01:18:21.016949897Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:18:21.048533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286321881.mount: Deactivated successfully. Aug 13 01:18:21.051864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546549471.mount: Deactivated successfully. 
Aug 13 01:18:21.073550 env[1257]: time="2025-08-13T01:18:21.073508784Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0\"" Aug 13 01:18:21.078813 env[1257]: time="2025-08-13T01:18:21.078787288Z" level=info msg="StartContainer for \"28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0\"" Aug 13 01:18:21.089018 systemd[1]: Started cri-containerd-28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0.scope. Aug 13 01:18:21.114883 env[1257]: time="2025-08-13T01:18:21.114419316Z" level=info msg="StartContainer for \"28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0\" returns successfully" Aug 13 01:18:21.183362 systemd[1]: cri-containerd-28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0.scope: Deactivated successfully. Aug 13 01:18:21.380018 env[1257]: time="2025-08-13T01:18:21.379780692Z" level=info msg="shim disconnected" id=28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0 Aug 13 01:18:21.380018 env[1257]: time="2025-08-13T01:18:21.379811626Z" level=warning msg="cleaning up after shim disconnected" id=28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0 namespace=k8s.io Aug 13 01:18:21.380018 env[1257]: time="2025-08-13T01:18:21.379817779Z" level=info msg="cleaning up dead shim" Aug 13 01:18:21.385540 env[1257]: time="2025-08-13T01:18:21.385506800Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4174 runtime=io.containerd.runc.v2\n" Aug 13 01:18:21.440286 kubelet[2059]: I0813 01:18:21.440254 2059 setters.go:600] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:18:21Z","lastTransitionTime":"2025-08-13T01:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 01:18:22.025178 env[1257]: time="2025-08-13T01:18:22.025151955Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:18:22.084353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114553678.mount: Deactivated successfully. Aug 13 01:18:22.088379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620190039.mount: Deactivated successfully. Aug 13 01:18:22.116794 env[1257]: time="2025-08-13T01:18:22.116758813Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba\"" Aug 13 01:18:22.117821 env[1257]: time="2025-08-13T01:18:22.117796883Z" level=info msg="StartContainer for \"5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba\"" Aug 13 01:18:22.127917 systemd[1]: Started cri-containerd-5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba.scope. Aug 13 01:18:22.148907 systemd[1]: cri-containerd-5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba.scope: Deactivated successfully. 
Aug 13 01:18:22.154078 env[1257]: time="2025-08-13T01:18:22.154045727Z" level=info msg="StartContainer for \"5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba\" returns successfully"
Aug 13 01:18:22.179265 env[1257]: time="2025-08-13T01:18:22.179228937Z" level=info msg="shim disconnected" id=5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba
Aug 13 01:18:22.179523 env[1257]: time="2025-08-13T01:18:22.179509262Z" level=warning msg="cleaning up after shim disconnected" id=5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba namespace=k8s.io
Aug 13 01:18:22.179613 env[1257]: time="2025-08-13T01:18:22.179603062Z" level=info msg="cleaning up dead shim"
Aug 13 01:18:22.186415 env[1257]: time="2025-08-13T01:18:22.186372948Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4229 runtime=io.containerd.runc.v2\n"
Aug 13 01:18:23.027447 env[1257]: time="2025-08-13T01:18:23.027415985Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:18:23.068287 env[1257]: time="2025-08-13T01:18:23.068240621Z" level=info msg="CreateContainer within sandbox \"78684553592af44851aa0c697672560760c3e1756cd2d93e5ff9442d822c3927\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e\""
Aug 13 01:18:23.069005 env[1257]: time="2025-08-13T01:18:23.068979093Z" level=info msg="StartContainer for \"258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e\""
Aug 13 01:18:23.087771 systemd[1]: Started cri-containerd-258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e.scope.
Aug 13 01:18:23.110327 env[1257]: time="2025-08-13T01:18:23.110303372Z" level=info msg="StartContainer for \"258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e\" returns successfully"
Aug 13 01:18:23.816690 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 01:18:23.954318 kubelet[2059]: W0813 01:18:23.953610 2059 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b1be7f_8dbc_426c_8bdd_22349049853a.slice/cri-containerd-098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149.scope WatchSource:0}: task 098cf18a76461f5b897bb9d2e70c0d2afb6767f46dedc081160534e9a6d6d149 not found: not found
Aug 13 01:18:24.049474 kubelet[2059]: I0813 01:18:24.049438 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w5qwr" podStartSLOduration=5.049424404 podStartE2EDuration="5.049424404s" podCreationTimestamp="2025-08-13 01:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:24.049375931 +0000 UTC m=+145.005394825" watchObservedRunningTime="2025-08-13 01:18:24.049424404 +0000 UTC m=+145.005443291"
Aug 13 01:18:25.850194 systemd[1]: run-containerd-runc-k8s.io-258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e-runc.P6bxw0.mount: Deactivated successfully.
Aug 13 01:18:26.661004 systemd-networkd[1063]: lxc_health: Link UP
Aug 13 01:18:26.705171 systemd-networkd[1063]: lxc_health: Gained carrier
Aug 13 01:18:26.705679 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 01:18:27.060423 kubelet[2059]: W0813 01:18:27.060389 2059 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b1be7f_8dbc_426c_8bdd_22349049853a.slice/cri-containerd-4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f.scope WatchSource:0}: task 4aa13ef3466d3ea4170880c977632c776340adc090b92f18eb40b0dd7b377c1f not found: not found
Aug 13 01:18:28.060939 systemd[1]: run-containerd-runc-k8s.io-258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e-runc.eonXLe.mount: Deactivated successfully.
Aug 13 01:18:28.232831 systemd-networkd[1063]: lxc_health: Gained IPv6LL
Aug 13 01:18:30.167234 kubelet[2059]: W0813 01:18:30.167197 2059 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b1be7f_8dbc_426c_8bdd_22349049853a.slice/cri-containerd-28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0.scope WatchSource:0}: task 28097c0f6e49e5bb4da99a25e4e2ec67a77189f46c69294620e157fa4d0767c0 not found: not found
Aug 13 01:18:30.247888 systemd[1]: run-containerd-runc-k8s.io-258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e-runc.fLksu2.mount: Deactivated successfully.
Aug 13 01:18:32.318369 systemd[1]: run-containerd-runc-k8s.io-258751a18cf622ef5bb41cacb2508cb84d38eaec77489abcd8fdcbfdcefd499e-runc.0cAsyE.mount: Deactivated successfully.
Aug 13 01:18:32.361783 sshd[3874]: pam_unix(sshd:session): session closed for user core
Aug 13 01:18:32.369463 systemd[1]: sshd@25-139.178.70.100:22-139.178.68.195:60524.service: Deactivated successfully.
Aug 13 01:18:32.370019 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 01:18:32.370463 systemd-logind[1246]: Session 28 logged out. Waiting for processes to exit.
Aug 13 01:18:32.371175 systemd-logind[1246]: Removed session 28.
Aug 13 01:18:33.276507 kubelet[2059]: W0813 01:18:33.276472 2059 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b1be7f_8dbc_426c_8bdd_22349049853a.slice/cri-containerd-5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba.scope WatchSource:0}: task 5a003ddd0acc82bab6fa4cd3ec8c5869331fcefe4fc0c7d19521b8bb5ee86eba not found: not found