Sep 13 01:03:18.658654 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 01:03:18.658669 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 01:03:18.658675 kernel: Disabled fast string operations
Sep 13 01:03:18.658679 kernel: BIOS-provided physical RAM map:
Sep 13 01:03:18.658683 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 13 01:03:18.658687 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 13 01:03:18.658693 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 13 01:03:18.658698 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 13 01:03:18.658702 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 13 01:03:18.658706 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 13 01:03:18.658710 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 13 01:03:18.658714 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 13 01:03:18.658718 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 13 01:03:18.658723 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 01:03:18.658729 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 13 01:03:18.658734 kernel: NX (Execute Disable) protection: active
Sep 13 01:03:18.658738 kernel: SMBIOS 2.7 present.
Sep 13 01:03:18.658743 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 13 01:03:18.658748 kernel: vmware: hypercall mode: 0x00
Sep 13 01:03:18.658752 kernel: Hypervisor detected: VMware
Sep 13 01:03:18.666111 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 13 01:03:18.666120 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 13 01:03:18.666125 kernel: vmware: using clock offset of 8584358057 ns
Sep 13 01:03:18.666130 kernel: tsc: Detected 3408.000 MHz processor
Sep 13 01:03:18.666136 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 01:03:18.666141 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 01:03:18.666146 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 13 01:03:18.666151 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 01:03:18.666156 kernel: total RAM covered: 3072M
Sep 13 01:03:18.666164 kernel: Found optimal setting for mtrr clean up
Sep 13 01:03:18.666169 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 13 01:03:18.666174 kernel: Using GB pages for direct mapping
Sep 13 01:03:18.666179 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:03:18.666184 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 13 01:03:18.666189 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Sep 13 01:03:18.666194 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Sep 13 01:03:18.666198 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Sep 13 01:03:18.666203 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 13 01:03:18.666208 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 13 01:03:18.666214 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Sep 13 01:03:18.666221 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Sep 13 01:03:18.666226 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Sep 13 01:03:18.666231 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Sep 13 01:03:18.666237 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Sep 13 01:03:18.666243 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Sep 13 01:03:18.666248 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 13 01:03:18.666253 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 13 01:03:18.666259 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 13 01:03:18.666264 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 13 01:03:18.666269 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 13 01:03:18.666274 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 13 01:03:18.666281 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 13 01:03:18.666288 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 13 01:03:18.666298 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 13 01:03:18.666303 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 13 01:03:18.666308 kernel: system APIC only can use physical flat
Sep 13 01:03:18.666313 kernel: Setting APIC routing to physical flat.
Sep 13 01:03:18.666319 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 01:03:18.666325 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 13 01:03:18.666333 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 13 01:03:18.666341 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 13 01:03:18.666349 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 13 01:03:18.666359 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 13 01:03:18.666364 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 13 01:03:18.666370 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 13 01:03:18.666375 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Sep 13 01:03:18.666380 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Sep 13 01:03:18.666385 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Sep 13 01:03:18.666390 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Sep 13 01:03:18.666395 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Sep 13 01:03:18.666400 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Sep 13 01:03:18.666405 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Sep 13 01:03:18.666412 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Sep 13 01:03:18.666418 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Sep 13 01:03:18.666435 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Sep 13 01:03:18.666442 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Sep 13 01:03:18.666447 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Sep 13 01:03:18.666452 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Sep 13 01:03:18.666457 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Sep 13 01:03:18.666462 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Sep 13 01:03:18.666467 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Sep 13 01:03:18.666475 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Sep 13 01:03:18.666480 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Sep 13 01:03:18.666485 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Sep 13 01:03:18.666490 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Sep 13 01:03:18.666495 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Sep 13 01:03:18.666502 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Sep 13 01:03:18.666509 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Sep 13 01:03:18.666517 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Sep 13 01:03:18.666523 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Sep 13 01:03:18.666528 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Sep 13 01:03:18.666535 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Sep 13 01:03:18.666540 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Sep 13 01:03:18.666545 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Sep 13 01:03:18.666550 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Sep 13 01:03:18.666555 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Sep 13 01:03:18.666560 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Sep 13 01:03:18.666565 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Sep 13 01:03:18.666570 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Sep 13 01:03:18.666575 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Sep 13 01:03:18.666583 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Sep 13 01:03:18.666591 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Sep 13 01:03:18.666599 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Sep 13 01:03:18.666604 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Sep 13 01:03:18.666609 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Sep 13 01:03:18.666614 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Sep 13 01:03:18.666620 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Sep 13 01:03:18.666625 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Sep 13 01:03:18.666630 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Sep 13 01:03:18.666635 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Sep 13 01:03:18.666641 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Sep 13 01:03:18.666646 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Sep 13 01:03:18.666651 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Sep 13 01:03:18.666656 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Sep 13 01:03:18.666661 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Sep 13 01:03:18.666669 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Sep 13 01:03:18.666676 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Sep 13 01:03:18.666684 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Sep 13 01:03:18.666694 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Sep 13 01:03:18.666700 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Sep 13 01:03:18.666706 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Sep 13 01:03:18.666711 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Sep 13 01:03:18.666718 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Sep 13 01:03:18.666723 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Sep 13 01:03:18.666729 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Sep 13 01:03:18.666734 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Sep 13 01:03:18.666740 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Sep 13 01:03:18.666747 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Sep 13 01:03:18.666755 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Sep 13 01:03:18.666764 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Sep 13 01:03:18.666770 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Sep 13 01:03:18.666775 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Sep 13 01:03:18.666781 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Sep 13 01:03:18.666786 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Sep 13 01:03:18.666792 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Sep 13 01:03:18.666798 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Sep 13 01:03:18.666803 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Sep 13 01:03:18.666810 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Sep 13 01:03:18.666815 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Sep 13 01:03:18.666821 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Sep 13 01:03:18.666826 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Sep 13 01:03:18.666834 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Sep 13 01:03:18.666843 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Sep 13 01:03:18.666849 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Sep 13 01:03:18.666855 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Sep 13 01:03:18.666860 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Sep 13 01:03:18.666866 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Sep 13 01:03:18.666873 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Sep 13 01:03:18.666878 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Sep 13 01:03:18.666883 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Sep 13 01:03:18.666889 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Sep 13 01:03:18.666894 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Sep 13 01:03:18.666900 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Sep 13 01:03:18.666906 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Sep 13 01:03:18.666913 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Sep 13 01:03:18.666921 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Sep 13 01:03:18.666929 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Sep 13 01:03:18.666935 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Sep 13 01:03:18.666941 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Sep 13 01:03:18.666946 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Sep 13 01:03:18.666952 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Sep 13 01:03:18.666957 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Sep 13 01:03:18.666963 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Sep 13 01:03:18.666968 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Sep 13 01:03:18.666974 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Sep 13 01:03:18.666979 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Sep 13 01:03:18.666986 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Sep 13 01:03:18.666993 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Sep 13 01:03:18.667001 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Sep 13 01:03:18.667009 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Sep 13 01:03:18.667014 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Sep 13 01:03:18.667020 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Sep 13 01:03:18.667025 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Sep 13 01:03:18.667031 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Sep 13 01:03:18.667036 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Sep 13 01:03:18.667043 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Sep 13 01:03:18.667048 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Sep 13 01:03:18.667054 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Sep 13 01:03:18.667059 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Sep 13 01:03:18.667065 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Sep 13 01:03:18.667071 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Sep 13 01:03:18.667079 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Sep 13 01:03:18.667088 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Sep 13 01:03:18.667094 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Sep 13 01:03:18.667099 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Sep 13 01:03:18.667106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 01:03:18.667112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 01:03:18.667117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 13 01:03:18.667123 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Sep 13 01:03:18.667129 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Sep 13 01:03:18.667134 kernel: Zone ranges:
Sep 13 01:03:18.667140 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 01:03:18.667146 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 13 01:03:18.667152 kernel: Normal empty
Sep 13 01:03:18.667162 kernel: Movable zone start for each node
Sep 13 01:03:18.667171 kernel: Early memory node ranges
Sep 13 01:03:18.667176 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 13 01:03:18.667182 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 13 01:03:18.667187 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 13 01:03:18.667193 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 13 01:03:18.667199 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 01:03:18.667205 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 13 01:03:18.667211 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 13 01:03:18.667217 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 13 01:03:18.667224 kernel: system APIC only can use physical flat
Sep 13 01:03:18.667229 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 13 01:03:18.667236 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 01:03:18.667244 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 01:03:18.667252 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 01:03:18.667258 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 01:03:18.667264 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 01:03:18.667269 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 01:03:18.667275 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 01:03:18.667282 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 01:03:18.667287 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 01:03:18.667293 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 01:03:18.667298 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 01:03:18.667304 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 01:03:18.667309 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 01:03:18.667314 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 01:03:18.667322 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 01:03:18.667331 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 01:03:18.667339 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 13 01:03:18.667345 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 13 01:03:18.667350 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 13 01:03:18.667355 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 13 01:03:18.667361 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 13 01:03:18.667366 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 13 01:03:18.667372 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 13 01:03:18.667377 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 13 01:03:18.667383 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 13 01:03:18.667389 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 13 01:03:18.667395 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 13 01:03:18.667402 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 13 01:03:18.667410 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 13 01:03:18.667418 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 13 01:03:18.667429 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 13 01:03:18.667435 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 13 01:03:18.667440 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 13 01:03:18.667446 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 13 01:03:18.667451 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 13 01:03:18.667458 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 13 01:03:18.667464 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 13 01:03:18.667469 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 13 01:03:18.667475 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 13 01:03:18.667482 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 13 01:03:18.667491 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 13 01:03:18.667498 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 13 01:03:18.667504 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 13 01:03:18.667509 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 13 01:03:18.667516 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 13 01:03:18.667522 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 13 01:03:18.667527 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 13 01:03:18.667533 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 13 01:03:18.667538 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 13 01:03:18.667544 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 13 01:03:18.667550 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 13 01:03:18.667555 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 13 01:03:18.667562 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 13 01:03:18.667571 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 13 01:03:18.667580 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 13 01:03:18.667585 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 13 01:03:18.667591 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 13 01:03:18.667596 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 13 01:03:18.667602 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 13 01:03:18.667607 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 13 01:03:18.667612 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 13 01:03:18.667618 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 13 01:03:18.667625 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 13 01:03:18.667630 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 13 01:03:18.667636 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 13 01:03:18.667642 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 13 01:03:18.667650 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 13 01:03:18.667659 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 13 01:03:18.667665 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 13 01:03:18.667670 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 13 01:03:18.667676 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 13 01:03:18.667683 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 13 01:03:18.667688 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 13 01:03:18.667694 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 13 01:03:18.667699 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 13 01:03:18.667705 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 13 01:03:18.667710 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 13 01:03:18.667716 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 13 01:03:18.667721 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 13 01:03:18.667729 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 13 01:03:18.667737 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 13 01:03:18.667746 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 13 01:03:18.667751 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 13 01:03:18.667757 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 13 01:03:18.667762 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 13 01:03:18.667768 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 13 01:03:18.667774 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 13 01:03:18.667779 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 13 01:03:18.667784 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 13 01:03:18.667790 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 13 01:03:18.667797 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 13 01:03:18.667802 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 13 01:03:18.667809 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 13 01:03:18.667817 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 13 01:03:18.667826 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 13 01:03:18.667831 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 13 01:03:18.667837 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 13 01:03:18.667842 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 13 01:03:18.667848 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 13 01:03:18.667855 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 13 01:03:18.667860 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 13 01:03:18.667866 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 13 01:03:18.667872 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 13 01:03:18.667877 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 13 01:03:18.667883 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 13 01:03:18.667889 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 13 01:03:18.667897 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 13 01:03:18.667906 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 13 01:03:18.667913 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 13 01:03:18.667919 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 13 01:03:18.667924 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 13 01:03:18.667930 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 13 01:03:18.667935 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 13 01:03:18.667941 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 13 01:03:18.667946 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 13 01:03:18.667951 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 13 01:03:18.667957 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 13 01:03:18.667963 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 13 01:03:18.667969 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 13 01:03:18.667978 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 13 01:03:18.667986 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 13 01:03:18.667992 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 13 01:03:18.667998 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 13 01:03:18.668003 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 13 01:03:18.668009 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 13 01:03:18.668014 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 13 01:03:18.668020 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 13 01:03:18.668026 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 13 01:03:18.668032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 13 01:03:18.668038 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 01:03:18.668043 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 13 01:03:18.668049 kernel: TSC deadline timer available
Sep 13 01:03:18.668056 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Sep 13 01:03:18.668065 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 13 01:03:18.668072 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 13 01:03:18.668078 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 01:03:18.668085 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Sep 13 01:03:18.668090 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 13 01:03:18.668096 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 13 01:03:18.668102 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 13 01:03:18.668108 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 13 01:03:18.668113 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 13 01:03:18.668118 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 13 01:03:18.668124 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 13 01:03:18.668130 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 13 01:03:18.668139 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 13 01:03:18.668154 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 13 01:03:18.668161 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 13 01:03:18.668168 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 13 01:03:18.668173 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 13 01:03:18.668179 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 13 01:03:18.668185 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 13 01:03:18.668191 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 13 01:03:18.668197 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 13 01:03:18.668203 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 13 01:03:18.668209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Sep 13 01:03:18.668217 kernel: Policy zone: DMA32
Sep 13 01:03:18.668227 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 01:03:18.668235 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:03:18.668241 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 13 01:03:18.668247 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 13 01:03:18.668254 kernel: printk: log_buf_len min size: 262144 bytes
Sep 13 01:03:18.668260 kernel: printk: log_buf_len: 1048576 bytes
Sep 13 01:03:18.668266 kernel: printk: early log buf free: 239728(91%)
Sep 13 01:03:18.668272 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:03:18.668278 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 01:03:18.668284 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:03:18.668291 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 155976K reserved, 0K cma-reserved)
Sep 13 01:03:18.668298 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 13 01:03:18.668307 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 01:03:18.668316 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 01:03:18.668323 kernel: rcu: Hierarchical RCU implementation.
Sep 13 01:03:18.668329 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:03:18.668335 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 13 01:03:18.668342 kernel: Rude variant of Tasks RCU enabled.
Sep 13 01:03:18.668348 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:03:18.668355 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:03:18.668361 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 13 01:03:18.668367 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 13 01:03:18.668372 kernel: random: crng init done
Sep 13 01:03:18.668379 kernel: Console: colour VGA+ 80x25
Sep 13 01:03:18.668388 kernel: printk: console [tty0] enabled
Sep 13 01:03:18.668398 kernel: printk: console [ttyS0] enabled
Sep 13 01:03:18.668404 kernel: ACPI: Core revision 20210730
Sep 13 01:03:18.668411 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 13 01:03:18.668418 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 01:03:18.668464 kernel: x2apic enabled
Sep 13 01:03:18.668475 kernel: Switched APIC routing to physical x2apic.
Sep 13 01:03:18.668484 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 01:03:18.668490 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 13 01:03:18.668496 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 13 01:03:18.668502 kernel: Disabled fast string operations
Sep 13 01:03:18.668508 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 01:03:18.668514 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 01:03:18.668523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 01:03:18.668529 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:03:18.668535 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 01:03:18.668541 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 01:03:18.668548 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 01:03:18.668558 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 01:03:18.668565 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 01:03:18.668572 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 01:03:18.668579 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 01:03:18.668589 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 01:03:18.668595 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 13 01:03:18.668601 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 01:03:18.668607 kernel: active return thunk: its_return_thunk
Sep 13 01:03:18.668613 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 01:03:18.668619 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 01:03:18.668625 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 01:03:18.668634 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 01:03:18.668645 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 01:03:18.668651 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 01:03:18.668657 kernel: Freeing SMP alternatives memory: 32K
Sep 13 01:03:18.668663 kernel: pid_max: default: 131072 minimum: 1024
Sep 13 01:03:18.668670 kernel: LSM: Security Framework initializing
Sep 13 01:03:18.668676 kernel: SELinux: Initializing.
Sep 13 01:03:18.668682 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 01:03:18.668688 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 01:03:18.668694 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 01:03:18.668701 kernel: Performance Events: Skylake events, core PMU driver.
Sep 13 01:03:18.668708 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 13 01:03:18.668717 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 13 01:03:18.668726 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 13 01:03:18.668732 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 13 01:03:18.668738 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 13 01:03:18.668743 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 13 01:03:18.668749 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 13 01:03:18.668755 kernel: ... version: 1
Sep 13 01:03:18.668762 kernel: ... bit width: 48
Sep 13 01:03:18.668768 kernel: ... generic registers: 4
Sep 13 01:03:18.668774 kernel: ... value mask: 0000ffffffffffff
Sep 13 01:03:18.668780 kernel: ... max period: 000000007fffffff
Sep 13 01:03:18.668786 kernel: ... fixed-purpose events: 0
Sep 13 01:03:18.668794 kernel: ... event mask: 000000000000000f
Sep 13 01:03:18.668802 kernel: signal: max sigframe size: 1776
Sep 13 01:03:18.668811 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:03:18.668816 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 01:03:18.668824 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:03:18.668830 kernel: x86: Booting SMP configuration:
Sep 13 01:03:18.668836 kernel: ....
node #0, CPUs: #1 Sep 13 01:03:18.668841 kernel: Disabled fast string operations Sep 13 01:03:18.668847 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Sep 13 01:03:18.668853 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 13 01:03:18.668859 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 01:03:18.668865 kernel: smpboot: Max logical packages: 128 Sep 13 01:03:18.668872 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 13 01:03:18.668881 kernel: devtmpfs: initialized Sep 13 01:03:18.668892 kernel: x86/mm: Memory block size: 128MB Sep 13 01:03:18.668898 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 13 01:03:18.668904 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 01:03:18.668910 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 13 01:03:18.668916 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 01:03:18.668922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 01:03:18.668928 kernel: audit: initializing netlink subsys (disabled) Sep 13 01:03:18.668934 kernel: audit: type=2000 audit(1757725397.085:1): state=initialized audit_enabled=0 res=1 Sep 13 01:03:18.668940 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 01:03:18.668947 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 01:03:18.668954 kernel: cpuidle: using governor menu Sep 13 01:03:18.668963 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 13 01:03:18.668973 kernel: ACPI: bus type PCI registered Sep 13 01:03:18.668979 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 01:03:18.668985 kernel: dca service started, version 1.12.1 Sep 13 01:03:18.668991 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Sep 13 01:03:18.668997 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Sep 13 01:03:18.669003 kernel: PCI: Using configuration type 1 for base access Sep 13 01:03:18.669010 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 01:03:18.669016 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 01:03:18.669022 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 01:03:18.669028 kernel: ACPI: Added _OSI(Module Device) Sep 13 01:03:18.669034 kernel: ACPI: Added _OSI(Processor Device) Sep 13 01:03:18.669042 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 01:03:18.669051 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 01:03:18.669058 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 01:03:18.669064 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 01:03:18.669072 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 01:03:18.669078 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 13 01:03:18.669083 kernel: ACPI: Interpreter enabled Sep 13 01:03:18.669090 kernel: ACPI: PM: (supports S0 S1 S5) Sep 13 01:03:18.669095 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 01:03:18.669101 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 01:03:18.669107 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 13 01:03:18.669114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 13 01:03:18.669199 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 01:03:18.669263 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 13 01:03:18.669321 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 13 01:03:18.669330 kernel: PCI host bridge to bus 0000:00 Sep 13 01:03:18.669388 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 01:03:18.669445 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Sep 13 01:03:18.669498 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 01:03:18.669553 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 01:03:18.669598 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 13 01:03:18.669649 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 13 01:03:18.669719 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Sep 13 01:03:18.669778 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Sep 13 01:03:18.669842 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Sep 13 01:03:18.669908 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Sep 13 01:03:18.669970 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Sep 13 01:03:18.670022 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 13 01:03:18.670081 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 13 01:03:18.670138 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 13 01:03:18.670189 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 13 01:03:18.670250 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Sep 13 01:03:18.670312 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 13 01:03:18.670363 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 13 01:03:18.679452 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Sep 13 01:03:18.679531 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Sep 13 01:03:18.679584 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Sep 13 01:03:18.679639 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Sep 13 01:03:18.679692 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Sep 13 01:03:18.679739 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Sep 13 01:03:18.679787 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Sep 13 01:03:18.679833 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Sep 13 01:03:18.679879 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 01:03:18.679931 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Sep 13 01:03:18.679983 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680033 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680086 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680136 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680187 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680235 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680287 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680337 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680388 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680452 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680507 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680556 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680607 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680657 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680709 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680756 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680806 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680854 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.680905 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.680955 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Sep 13 01:03:18.681005 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681053 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681105 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681159 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681220 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681271 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681330 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681380 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681442 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681492 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681544 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681599 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681652 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681700 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681753 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681801 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681852 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.681902 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.681954 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.682002 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.682055 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.682103 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.682155 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.682204 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.682259 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.682307 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.682358 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.682407 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.684535 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.684593 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.684651 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.684701 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.684753 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.684801 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.684852 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.684900 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.684952 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.685001 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.685052 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.685100 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.685153 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.685201 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.685254 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 13 01:03:18.685301 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.685353 kernel: pci_bus 0000:01: extended config space not accessible Sep 13 01:03:18.685403 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 01:03:18.685461 kernel: pci_bus 0000:02: extended config space not accessible Sep 
13 01:03:18.685470 kernel: acpiphp: Slot [32] registered Sep 13 01:03:18.685477 kernel: acpiphp: Slot [33] registered Sep 13 01:03:18.685485 kernel: acpiphp: Slot [34] registered Sep 13 01:03:18.685491 kernel: acpiphp: Slot [35] registered Sep 13 01:03:18.685497 kernel: acpiphp: Slot [36] registered Sep 13 01:03:18.685503 kernel: acpiphp: Slot [37] registered Sep 13 01:03:18.685509 kernel: acpiphp: Slot [38] registered Sep 13 01:03:18.685515 kernel: acpiphp: Slot [39] registered Sep 13 01:03:18.685520 kernel: acpiphp: Slot [40] registered Sep 13 01:03:18.685526 kernel: acpiphp: Slot [41] registered Sep 13 01:03:18.685532 kernel: acpiphp: Slot [42] registered Sep 13 01:03:18.685538 kernel: acpiphp: Slot [43] registered Sep 13 01:03:18.685545 kernel: acpiphp: Slot [44] registered Sep 13 01:03:18.685551 kernel: acpiphp: Slot [45] registered Sep 13 01:03:18.685557 kernel: acpiphp: Slot [46] registered Sep 13 01:03:18.685563 kernel: acpiphp: Slot [47] registered Sep 13 01:03:18.685569 kernel: acpiphp: Slot [48] registered Sep 13 01:03:18.685575 kernel: acpiphp: Slot [49] registered Sep 13 01:03:18.685581 kernel: acpiphp: Slot [50] registered Sep 13 01:03:18.685587 kernel: acpiphp: Slot [51] registered Sep 13 01:03:18.685593 kernel: acpiphp: Slot [52] registered Sep 13 01:03:18.685599 kernel: acpiphp: Slot [53] registered Sep 13 01:03:18.685606 kernel: acpiphp: Slot [54] registered Sep 13 01:03:18.685611 kernel: acpiphp: Slot [55] registered Sep 13 01:03:18.685617 kernel: acpiphp: Slot [56] registered Sep 13 01:03:18.685623 kernel: acpiphp: Slot [57] registered Sep 13 01:03:18.685629 kernel: acpiphp: Slot [58] registered Sep 13 01:03:18.685635 kernel: acpiphp: Slot [59] registered Sep 13 01:03:18.685641 kernel: acpiphp: Slot [60] registered Sep 13 01:03:18.685647 kernel: acpiphp: Slot [61] registered Sep 13 01:03:18.685653 kernel: acpiphp: Slot [62] registered Sep 13 01:03:18.685660 kernel: acpiphp: Slot [63] registered Sep 13 01:03:18.685709 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Sep 13 01:03:18.685758 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 01:03:18.685805 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 01:03:18.685852 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 01:03:18.685907 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 13 01:03:18.685961 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 13 01:03:18.686011 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 13 01:03:18.686064 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 13 01:03:18.686113 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 13 01:03:18.686168 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 13 01:03:18.686219 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 13 01:03:18.686269 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 13 01:03:18.686317 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 01:03:18.686368 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 13 01:03:18.686416 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 01:03:18.688525 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 01:03:18.688580 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 01:03:18.688629 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 01:03:18.688679 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 01:03:18.688727 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 13 01:03:18.688774 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 01:03:18.688824 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 01:03:18.688873 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 01:03:18.688920 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 01:03:18.688973 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 01:03:18.689027 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 01:03:18.689077 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 01:03:18.689124 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 01:03:18.689170 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 01:03:18.689221 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 01:03:18.689268 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 01:03:18.689314 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 01:03:18.689363 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 01:03:18.689411 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 01:03:18.689466 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 01:03:18.689516 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 01:03:18.689563 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 01:03:18.689620 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 01:03:18.689671 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 01:03:18.689719 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 01:03:18.689766 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 01:03:18.689825 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 13 01:03:18.689876 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 13 01:03:18.689926 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 13 01:03:18.689974 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 13 01:03:18.690031 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 13 01:03:18.690080 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 01:03:18.690129 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 13 01:03:18.690180 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 01:03:18.690230 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 01:03:18.690280 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 01:03:18.690327 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 01:03:18.690375 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 01:03:18.699653 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 01:03:18.699770 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 01:03:18.699868 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 01:03:18.699924 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 01:03:18.699976 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 01:03:18.700024 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 01:03:18.700072 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 01:03:18.700120 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 01:03:18.700170 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 01:03:18.700217 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 01:03:18.700263 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 01:03:18.700315 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 01:03:18.700363 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 01:03:18.700410 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 01:03:18.700474 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 01:03:18.700523 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 01:03:18.700569 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 01:03:18.700619 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 01:03:18.700666 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 01:03:18.700716 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 01:03:18.700766 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 01:03:18.700813 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 01:03:18.700859 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 01:03:18.700907 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 01:03:18.700954 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 01:03:18.701008 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 01:03:18.701056 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 01:03:18.701107 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 01:03:18.701154 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 01:03:18.701201 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 01:03:18.701248 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 01:03:18.701297 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 01:03:18.701344 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 01:03:18.701391 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 01:03:18.701451 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 01:03:18.701501 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 01:03:18.701548 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 01:03:18.701600 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 01:03:18.701651 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 01:03:18.701698 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 01:03:18.701745 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 01:03:18.701795 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 01:03:18.701846 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 01:03:18.701893 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 01:03:18.701942 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 01:03:18.701990 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 01:03:18.702037 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 01:03:18.702086 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 01:03:18.702133 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 01:03:18.702181 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 01:03:18.702232 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 01:03:18.702281 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 01:03:18.702327 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 01:03:18.702375 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 01:03:18.702429 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 01:03:18.702478 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 01:03:18.702526 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 01:03:18.702574 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 01:03:18.702627 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 01:03:18.702674 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 01:03:18.702722 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 01:03:18.702771 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 01:03:18.702834 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 01:03:18.702884 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 01:03:18.702933 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 
01:03:18.702983 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 01:03:18.703031 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 01:03:18.703080 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 01:03:18.703128 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 01:03:18.703175 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 01:03:18.703225 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 01:03:18.703272 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 01:03:18.703320 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 01:03:18.703368 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 01:03:18.703418 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 01:03:18.703532 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 01:03:18.703541 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 13 01:03:18.703547 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 13 01:03:18.703553 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 13 01:03:18.703560 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 01:03:18.703566 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 13 01:03:18.703572 kernel: iommu: Default domain type: Translated Sep 13 01:03:18.703581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 01:03:18.703643 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 13 01:03:18.703691 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 01:03:18.703737 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 13 01:03:18.703745 kernel: vgaarb: loaded Sep 13 01:03:18.703751 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 01:03:18.703758 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 01:03:18.703764 kernel: PTP clock support registered Sep 13 01:03:18.703770 kernel: PCI: Using ACPI for IRQ routing Sep 13 01:03:18.703778 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 01:03:18.703784 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 13 01:03:18.703790 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 13 01:03:18.703796 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 13 01:03:18.703802 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 13 01:03:18.703809 kernel: clocksource: Switched to clocksource tsc-early Sep 13 01:03:18.703815 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 01:03:18.703821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 01:03:18.703827 kernel: pnp: PnP ACPI init Sep 13 01:03:18.703882 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 13 01:03:18.703926 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 13 01:03:18.703969 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 13 01:03:18.704015 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 13 01:03:18.704063 kernel: pnp 00:06: [dma 2] Sep 13 01:03:18.704109 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 13 01:03:18.704154 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 13 01:03:18.704197 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 13 01:03:18.704205 kernel: pnp: PnP ACPI: found 8 devices Sep 13 01:03:18.704211 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 01:03:18.704217 kernel: NET: Registered PF_INET protocol family Sep 13 01:03:18.704223 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 01:03:18.704229 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Sep 13 01:03:18.704236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 01:03:18.704242 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 01:03:18.704249 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 01:03:18.704255 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 01:03:18.704262 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 01:03:18.704268 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 01:03:18.704274 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 01:03:18.704279 kernel: NET: Registered PF_XDP protocol family Sep 13 01:03:18.704329 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 13 01:03:18.704379 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 01:03:18.704437 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 01:03:18.704488 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 01:03:18.704537 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 01:03:18.704587 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 13 01:03:18.704637 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 13 01:03:18.705017 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 13 01:03:18.705077 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 13 01:03:18.705127 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 13 01:03:18.705176 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Sep 13 01:03:18.705224 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 13 01:03:18.705273 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 13 01:03:18.705322 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 13 01:03:18.705371 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 13 01:03:18.705418 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 13 01:03:18.705506 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 13 01:03:18.705556 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 13 01:03:18.705604 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 13 01:03:18.705654 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 13 01:03:18.705709 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 13 01:03:18.705758 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 13 01:03:18.705805 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 13 01:03:18.705852 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 01:03:18.706197 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 01:03:18.706255 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706336 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.706401 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706464 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.706515 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706561 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.706617 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706665 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.706718 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706767 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.706815 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706862 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.706910 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.706957 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707005 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707052 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707110 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707181 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707231 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707279 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707327 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707535 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707588 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707636 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707685 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707736 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707784 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707835 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Sep 13 01:03:18.707891 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.707939 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.707986 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.708033 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.708091 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.708158 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.708217 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.708281 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.708331 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.708633 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.708684 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.708732 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709038 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709101 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709151 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709200 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709248 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709295 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709342 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709390 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709449 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709498 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709734 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709786 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709835 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709882 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.709930 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.709977 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710024 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710071 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710117 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710167 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710214 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710261 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710309 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710356 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710403 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710480 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710529 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710576 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710622 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.710696 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.710989 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.711043 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711093 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Sep 13 01:03:18.711140 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711188 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.711236 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711283 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.711331 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711381 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.711438 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711491 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.711539 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711675 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 01:03:18.711726 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 01:03:18.711777 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 01:03:18.711827 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 13 01:03:18.711875 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 01:03:18.711922 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 01:03:18.711972 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 01:03:18.712024 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 13 01:03:18.712072 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 01:03:18.712119 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 01:03:18.712166 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 01:03:18.712214 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 01:03:18.712262 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 01:03:18.712310 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Sep 13 01:03:18.712359 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 01:03:18.712406 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 01:03:18.712502 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 01:03:18.712552 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 01:03:18.712604 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 01:03:18.712835 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 01:03:18.712889 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 01:03:18.712937 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 01:03:18.712985 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 01:03:18.713059 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 01:03:18.713361 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 01:03:18.713607 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 01:03:18.713664 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 01:03:18.713713 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 01:03:18.713761 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 01:03:18.713811 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 01:03:18.713859 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 01:03:18.713906 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 01:03:18.713954 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 01:03:18.714001 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 01:03:18.714049 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 01:03:18.714101 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 13 
01:03:18.714150 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 01:03:18.714198 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 01:03:18.714246 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 01:03:18.714294 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 01:03:18.714341 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 01:03:18.714406 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 01:03:18.714506 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 01:03:18.714554 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 01:03:18.714603 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 01:03:18.714665 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 01:03:18.714960 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 01:03:18.715017 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 01:03:18.715156 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 01:03:18.715210 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 01:03:18.715259 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 01:03:18.715491 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 01:03:18.715547 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 01:03:18.715596 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 01:03:18.715645 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 01:03:18.715718 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 01:03:18.716007 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 01:03:18.716242 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 01:03:18.716297 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 
01:03:18.716348 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 01:03:18.716397 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 01:03:18.716480 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 01:03:18.716529 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 01:03:18.716577 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 01:03:18.716629 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 01:03:18.716700 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 01:03:18.716990 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 01:03:18.717119 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 01:03:18.717180 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 01:03:18.717229 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 01:03:18.717277 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 01:03:18.717619 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 01:03:18.717674 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 01:03:18.717723 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 01:03:18.717771 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 01:03:18.717820 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 01:03:18.717870 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 01:03:18.717932 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 01:03:18.717982 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 01:03:18.718029 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 01:03:18.718076 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 01:03:18.718124 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Sep 13 01:03:18.718171 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 01:03:18.718219 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 01:03:18.718267 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 01:03:18.718315 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 01:03:18.718364 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 01:03:18.718411 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 01:03:18.718495 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 01:03:18.718544 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 01:03:18.718593 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 01:03:18.718640 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 01:03:18.718816 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 01:03:18.718867 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 01:03:18.718916 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 01:03:18.718967 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 01:03:18.719014 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 01:03:18.719061 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 01:03:18.719107 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 01:03:18.719154 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 01:03:18.719202 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 01:03:18.719249 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 01:03:18.719297 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 01:03:18.719344 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 01:03:18.719390 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Sep 13 01:03:18.719447 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 01:03:18.719496 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 01:03:18.719544 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 01:03:18.719595 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 01:03:18.719644 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 01:03:18.719692 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 01:03:18.719739 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 01:03:18.719786 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 01:03:18.719834 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 01:03:18.719884 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 01:03:18.719931 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 01:03:18.719977 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 01:03:18.720021 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 01:03:18.720064 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 13 01:03:18.720105 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 13 01:03:18.720147 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 13 01:03:18.720194 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 13 01:03:18.720242 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 13 01:03:18.720286 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 01:03:18.720331 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 01:03:18.720375 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 01:03:18.720419 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Sep 13 01:03:18.720477 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 13 01:03:18.720522 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 13 01:03:18.720575 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 13 01:03:18.720621 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 13 01:03:18.720665 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 01:03:18.720714 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 13 01:03:18.721055 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 13 01:03:18.721106 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 01:03:18.721158 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 13 01:03:18.721207 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 13 01:03:18.721405 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 01:03:18.721504 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 13 01:03:18.721568 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 01:03:18.721619 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 13 01:03:18.721664 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 01:03:18.721712 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 13 01:03:18.721760 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 01:03:18.721809 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 13 01:03:18.721853 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 01:03:18.722114 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 13 01:03:18.722163 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 01:03:18.722215 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 13 01:03:18.722613 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 13 01:03:18.722667 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 01:03:18.722728 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 13 01:03:18.722776 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 13 01:03:18.723187 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 01:03:18.723248 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 13 01:03:18.723297 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 13 01:03:18.723343 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 01:03:18.723392 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 13 01:03:18.723452 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 01:03:18.723675 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 13 01:03:18.723734 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 01:03:18.723789 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 13 01:03:18.723835 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 01:03:18.724103 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 13 01:03:18.724153 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 01:03:18.724206 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 13 01:03:18.724253 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 01:03:18.724326 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 13 01:03:18.724629 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 13 01:03:18.724678 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 01:03:18.724727 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 13 01:03:18.724774 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 13 01:03:18.724817 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 01:03:18.724866 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 13 01:03:18.724915 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 13 01:03:18.724960 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 01:03:18.725009 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 13 01:03:18.725054 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 01:03:18.725205 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 13 01:03:18.725258 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 01:03:18.725310 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 13 01:03:18.725623 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 01:03:18.725680 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 13 01:03:18.725727 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 01:03:18.725800 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 13 01:03:18.726092 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 01:03:18.726146 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 13 01:03:18.726297 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 13 01:03:18.726345 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 01:03:18.726394 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 13 01:03:18.726455 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 13 01:03:18.726800 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 01:03:18.726854 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 13 01:03:18.726900 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 01:03:18.726955 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 13 01:03:18.727001 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 01:03:18.727049 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 13 01:03:18.727094 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 01:03:18.727143 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 13 01:03:18.727190 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 01:03:18.727239 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 13 01:03:18.727284 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 01:03:18.727332 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 13 01:03:18.727377 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 01:03:18.727469 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 01:03:18.727483 kernel: PCI: CLS 32 bytes, default 64 Sep 13 01:03:18.727490 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 01:03:18.727497 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 13 01:03:18.727503 kernel: clocksource: Switched to clocksource tsc Sep 13 01:03:18.727509 kernel: Initialise system trusted keyrings Sep 13 01:03:18.727516 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 01:03:18.727522 kernel: Key type asymmetric registered Sep 13 01:03:18.727529 kernel: Asymmetric key parser 'x509' registered Sep 13 01:03:18.727535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 01:03:18.727543 kernel: io scheduler mq-deadline registered Sep 13 01:03:18.727549 kernel: io scheduler kyber registered Sep 13 01:03:18.727555 kernel: io scheduler bfq 
registered Sep 13 01:03:18.727608 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 13 01:03:18.727780 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.727840 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 13 01:03:18.727890 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.727939 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 13 01:03:18.728283 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.728339 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 13 01:03:18.728390 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.728448 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 13 01:03:18.728498 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.728547 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 13 01:03:18.728598 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.728937 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 13 01:03:18.728992 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.729045 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 13 01:03:18.729094 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.729148 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 13 01:03:18.729196 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.729247 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 13 01:03:18.729295 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.729345 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 13 01:03:18.729393 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.729479 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 13 01:03:18.729531 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.729580 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 13 01:03:18.729638 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.730196 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 13 01:03:18.730254 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.730307 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 13 01:03:18.730564 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.730619 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 13 01:03:18.730670 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.730736 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 13 01:03:18.731040 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731103 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 13 01:03:18.731344 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731403 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 13 01:03:18.731503 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731574 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 13 01:03:18.731624 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731674 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 13 01:03:18.731725 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731773 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 13 01:03:18.731822 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731871 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 13 01:03:18.731919 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.731970 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 13 01:03:18.732019 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732068 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 13 01:03:18.732115 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732164 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 13 01:03:18.732224 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732277 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 13 01:03:18.732327 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732376 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 13 01:03:18.732431 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732482 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 13 01:03:18.732530 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732593 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 13 01:03:18.732660 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732710 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 13 01:03:18.732759 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732808 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 13 01:03:18.732859 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 01:03:18.732869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 01:03:18.732876 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 01:03:18.732883 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 01:03:18.732889 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 13 01:03:18.732896 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 01:03:18.732902 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 01:03:18.732951 kernel: rtc_cmos 00:01: registered as rtc0 Sep 13 01:03:18.732999 kernel: rtc_cmos 00:01: setting system clock to 2025-09-13T01:03:18 UTC (1757725398) Sep 13 01:03:18.733043 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 13 01:03:18.733052 kernel: intel_pstate: CPU model not supported Sep 13 01:03:18.733059 kernel: NET: Registered PF_INET6 protocol family Sep 13 01:03:18.733065 kernel: Segment Routing with IPv6 Sep 13 01:03:18.733072 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 01:03:18.733078 kernel: NET: Registered PF_PACKET protocol family Sep 13 01:03:18.733085 kernel: Key type dns_resolver registered Sep 13 01:03:18.733093 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 01:03:18.733101 kernel: IPI shorthand broadcast: enabled Sep 13 01:03:18.733107 kernel: sched_clock: Marking stable (874003153, 223437545)->(1161633095, -64192397) Sep 13 01:03:18.733114 kernel: registered taskstats version 1 Sep 13 01:03:18.733120 kernel: Loading compiled-in X.509 certificates Sep 13 01:03:18.733126 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 01:03:18.733133 kernel: Key type .fscrypt registered Sep 13 01:03:18.733139 kernel: Key type fscrypt-provisioning 
registered Sep 13 01:03:18.733146 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 01:03:18.733153 kernel: ima: Allocated hash algorithm: sha1 Sep 13 01:03:18.733160 kernel: ima: No architecture policies found Sep 13 01:03:18.733166 kernel: clk: Disabling unused clocks Sep 13 01:03:18.733173 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 01:03:18.733180 kernel: Write protecting the kernel read-only data: 28672k Sep 13 01:03:18.733186 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 01:03:18.733193 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 01:03:18.733199 kernel: Run /init as init process Sep 13 01:03:18.733205 kernel: with arguments: Sep 13 01:03:18.733213 kernel: /init Sep 13 01:03:18.733220 kernel: with environment: Sep 13 01:03:18.733226 kernel: HOME=/ Sep 13 01:03:18.733232 kernel: TERM=linux Sep 13 01:03:18.733238 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 01:03:18.733246 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:03:18.733255 systemd[1]: Detected virtualization vmware. Sep 13 01:03:18.733261 systemd[1]: Detected architecture x86-64. Sep 13 01:03:18.733268 systemd[1]: Running in initrd. Sep 13 01:03:18.733275 systemd[1]: No hostname configured, using default hostname. Sep 13 01:03:18.733281 systemd[1]: Hostname set to . Sep 13 01:03:18.733288 systemd[1]: Initializing machine ID from random generator. Sep 13 01:03:18.733294 systemd[1]: Queued start job for default target initrd.target. Sep 13 01:03:18.733300 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:03:18.733306 systemd[1]: Reached target cryptsetup.target. 
Sep 13 01:03:18.733313 systemd[1]: Reached target paths.target. Sep 13 01:03:18.733319 systemd[1]: Reached target slices.target. Sep 13 01:03:18.733326 systemd[1]: Reached target swap.target. Sep 13 01:03:18.733332 systemd[1]: Reached target timers.target. Sep 13 01:03:18.733338 systemd[1]: Listening on iscsid.socket. Sep 13 01:03:18.733344 systemd[1]: Listening on iscsiuio.socket. Sep 13 01:03:18.733351 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 01:03:18.733357 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 01:03:18.733363 systemd[1]: Listening on systemd-journald.socket. Sep 13 01:03:18.733371 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:03:18.733378 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 01:03:18.733384 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 01:03:18.733390 systemd[1]: Reached target sockets.target. Sep 13 01:03:18.733396 systemd[1]: Starting kmod-static-nodes.service... Sep 13 01:03:18.733403 systemd[1]: Finished network-cleanup.service. Sep 13 01:03:18.733409 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 01:03:18.733415 systemd[1]: Starting systemd-journald.service... Sep 13 01:03:18.733422 systemd[1]: Starting systemd-modules-load.service... Sep 13 01:03:18.733460 systemd[1]: Starting systemd-resolved.service... Sep 13 01:03:18.733467 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 01:03:18.733473 systemd[1]: Finished kmod-static-nodes.service. Sep 13 01:03:18.733480 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 01:03:18.733486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 01:03:18.733493 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 01:03:18.733499 kernel: audit: type=1130 audit(1757725398.657:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:18.733506 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 01:03:18.733514 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 01:03:18.733521 kernel: audit: type=1130 audit(1757725398.660:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.733528 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 01:03:18.733534 systemd[1]: Starting dracut-cmdline.service... Sep 13 01:03:18.733540 kernel: audit: type=1130 audit(1757725398.671:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.733547 systemd[1]: Started systemd-resolved.service. Sep 13 01:03:18.733553 systemd[1]: Reached target nss-lookup.target. Sep 13 01:03:18.733561 kernel: audit: type=1130 audit(1757725398.694:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.733568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 01:03:18.733575 kernel: Bridge firewalling registered Sep 13 01:03:18.733581 kernel: SCSI subsystem initialized Sep 13 01:03:18.733605 systemd-journald[217]: Journal started Sep 13 01:03:18.733640 systemd-journald[217]: Runtime Journal (/run/log/journal/d6a9244f6fcb4602ba6da098d26921ac) is 4.8M, max 38.8M, 34.0M free. Sep 13 01:03:18.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:18.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.650870 systemd-modules-load[218]: Inserted module 'overlay' Sep 13 01:03:18.734533 systemd[1]: Started systemd-journald.service. Sep 13 01:03:18.690653 systemd-resolved[219]: Positive Trust Anchors: Sep 13 01:03:18.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.690659 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:03:18.690679 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 01:03:18.737808 kernel: audit: type=1130 audit(1757725398.733:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:18.694422 systemd-resolved[219]: Defaulting to hostname 'linux'. Sep 13 01:03:18.707367 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 13 01:03:18.738150 dracut-cmdline[233]: dracut-dracut-053 Sep 13 01:03:18.738150 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 13 01:03:18.738150 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 01:03:18.742437 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 01:03:18.746796 kernel: device-mapper: uevent: version 1.0.3 Sep 13 01:03:18.746818 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 01:03:18.746827 kernel: Loading iSCSI transport class v2.0-870. Sep 13 01:03:18.747135 systemd-modules-load[218]: Inserted module 'dm_multipath' Sep 13 01:03:18.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.750561 systemd[1]: Finished systemd-modules-load.service. Sep 13 01:03:18.751066 systemd[1]: Starting systemd-sysctl.service... Sep 13 01:03:18.754438 kernel: audit: type=1130 audit(1757725398.749:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.756951 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 01:03:18.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.760433 kernel: audit: type=1130 audit(1757725398.755:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.764437 kernel: iscsi: registered transport (tcp) Sep 13 01:03:18.780777 kernel: iscsi: registered transport (qla4xxx) Sep 13 01:03:18.780807 kernel: QLogic iSCSI HBA Driver Sep 13 01:03:18.797635 systemd[1]: Finished dracut-cmdline.service. Sep 13 01:03:18.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:18.798257 systemd[1]: Starting dracut-pre-udev.service... Sep 13 01:03:18.800637 kernel: audit: type=1130 audit(1757725398.796:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:18.836451 kernel: raid6: avx2x4 gen() 47654 MB/s Sep 13 01:03:18.852441 kernel: raid6: avx2x4 xor() 21732 MB/s Sep 13 01:03:18.869438 kernel: raid6: avx2x2 gen() 52973 MB/s Sep 13 01:03:18.886481 kernel: raid6: avx2x2 xor() 31831 MB/s Sep 13 01:03:18.903444 kernel: raid6: avx2x1 gen() 43306 MB/s Sep 13 01:03:18.920442 kernel: raid6: avx2x1 xor() 27358 MB/s Sep 13 01:03:18.937439 kernel: raid6: sse2x4 gen() 20994 MB/s Sep 13 01:03:18.954441 kernel: raid6: sse2x4 xor() 11875 MB/s Sep 13 01:03:18.971442 kernel: raid6: sse2x2 gen() 21394 MB/s Sep 13 01:03:18.988438 kernel: raid6: sse2x2 xor() 13297 MB/s Sep 13 01:03:19.005443 kernel: raid6: sse2x1 gen() 18027 MB/s Sep 13 01:03:19.022720 kernel: raid6: sse2x1 xor() 8721 MB/s Sep 13 01:03:19.022742 kernel: raid6: using algorithm avx2x2 gen() 52973 MB/s Sep 13 01:03:19.022750 kernel: raid6: .... xor() 31831 MB/s, rmw enabled Sep 13 01:03:19.023875 kernel: raid6: using avx2x2 recovery algorithm Sep 13 01:03:19.032436 kernel: xor: automatically using best checksumming function avx Sep 13 01:03:19.094444 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 01:03:19.098940 systemd[1]: Finished dracut-pre-udev.service. Sep 13 01:03:19.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:19.100000 audit: BPF prog-id=7 op=LOAD Sep 13 01:03:19.100000 audit: BPF prog-id=8 op=LOAD Sep 13 01:03:19.101731 systemd[1]: Starting systemd-udevd.service... Sep 13 01:03:19.102460 kernel: audit: type=1130 audit(1757725399.097:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:19.109837 systemd-udevd[415]: Using default interface naming scheme 'v252'. 
Sep 13 01:03:19.112743 systemd[1]: Started systemd-udevd.service. Sep 13 01:03:19.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:19.113559 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 01:03:19.120488 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Sep 13 01:03:19.135604 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 01:03:19.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:19.136136 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 01:03:19.201028 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 01:03:19.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:19.260935 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Sep 13 01:03:19.260969 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 13 01:03:19.263437 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 13 01:03:19.271695 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 13 01:03:19.271776 kernel: vmw_pvscsi: using 64bit dma Sep 13 01:03:19.271785 kernel: vmw_pvscsi: max_id: 16 Sep 13 01:03:19.271795 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 13 01:03:19.284441 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 01:03:19.290834 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 13 01:03:19.290872 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 13 01:03:19.290881 kernel: vmw_pvscsi: using MSI-X Sep 13 01:03:19.290889 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 13 01:03:19.292833 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 13 01:03:19.294465 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 13 01:03:19.300243 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 01:03:19.300278 kernel: AES CTR mode by8 optimization enabled Sep 13 01:03:19.300287 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 13 01:03:19.301443 kernel: libata version 3.00 loaded. 
Sep 13 01:03:19.304659 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 13 01:03:19.307041 kernel: scsi host1: ata_piix Sep 13 01:03:19.307125 kernel: scsi host2: ata_piix Sep 13 01:03:19.307215 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Sep 13 01:03:19.307225 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Sep 13 01:03:19.472492 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 13 01:03:19.476480 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 13 01:03:19.484406 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 13 01:03:19.493452 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 01:03:19.493530 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 13 01:03:19.493600 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 13 01:03:19.493746 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 13 01:03:19.493807 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:03:19.493816 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 01:03:19.516442 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 13 01:03:19.532769 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 01:03:19.532780 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 01:03:19.539440 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (473) Sep 13 01:03:19.544700 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 01:03:19.544834 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 01:03:19.545357 systemd[1]: Starting disk-uuid.service... Sep 13 01:03:19.547773 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 01:03:19.552360 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Sep 13 01:03:19.554470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 01:03:19.576442 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:03:20.683028 disk-uuid[548]: The operation has completed successfully. Sep 13 01:03:20.683436 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:03:21.170791 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 01:03:21.170863 systemd[1]: Finished disk-uuid.service. Sep 13 01:03:21.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.171622 systemd[1]: Starting verity-setup.service... Sep 13 01:03:21.211445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 01:03:21.430047 systemd[1]: Found device dev-mapper-usr.device. Sep 13 01:03:21.430942 systemd[1]: Mounting sysusr-usr.mount... Sep 13 01:03:21.432359 systemd[1]: Finished verity-setup.service. Sep 13 01:03:21.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.531456 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 01:03:21.531203 systemd[1]: Mounted sysusr-usr.mount. Sep 13 01:03:21.531852 systemd[1]: Starting afterburn-network-kargs.service... Sep 13 01:03:21.532362 systemd[1]: Starting ignition-setup.service... 
Sep 13 01:03:21.572917 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:03:21.572953 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:03:21.572961 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:03:21.582439 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 01:03:21.592128 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 01:03:21.598062 systemd[1]: Finished ignition-setup.service. Sep 13 01:03:21.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.598639 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 01:03:21.801403 systemd[1]: Finished afterburn-network-kargs.service. Sep 13 01:03:21.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.802183 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 01:03:21.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.847000 audit: BPF prog-id=9 op=LOAD Sep 13 01:03:21.848301 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 01:03:21.849191 systemd[1]: Starting systemd-networkd.service... Sep 13 01:03:21.864155 systemd-networkd[731]: lo: Link UP Sep 13 01:03:21.864164 systemd-networkd[731]: lo: Gained carrier Sep 13 01:03:21.864639 systemd-networkd[731]: Enumeration completed Sep 13 01:03:21.864693 systemd[1]: Started systemd-networkd.service. 
Sep 13 01:03:21.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.864887 systemd[1]: Reached target network.target. Sep 13 01:03:21.865584 systemd[1]: Starting iscsiuio.service... Sep 13 01:03:21.868960 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 13 01:03:21.869064 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 13 01:03:21.865971 systemd-networkd[731]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 13 01:03:21.869540 systemd[1]: Started iscsiuio.service. Sep 13 01:03:21.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.870123 systemd[1]: Starting iscsid.service... Sep 13 01:03:21.870543 systemd-networkd[731]: ens192: Link UP Sep 13 01:03:21.870545 systemd-networkd[731]: ens192: Gained carrier Sep 13 01:03:21.872323 iscsid[738]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:03:21.872323 iscsid[738]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 01:03:21.872323 iscsid[738]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 01:03:21.872323 iscsid[738]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 13 01:03:21.873136 iscsid[738]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:03:21.873136 iscsid[738]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 01:03:21.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.873432 systemd[1]: Started iscsid.service. Sep 13 01:03:21.873974 systemd[1]: Starting dracut-initqueue.service... Sep 13 01:03:21.880327 systemd[1]: Finished dracut-initqueue.service. Sep 13 01:03:21.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:21.880648 systemd[1]: Reached target remote-fs-pre.target. Sep 13 01:03:21.880857 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:03:21.881067 systemd[1]: Reached target remote-fs.target. Sep 13 01:03:21.881695 systemd[1]: Starting dracut-pre-mount.service... Sep 13 01:03:21.886644 systemd[1]: Finished dracut-pre-mount.service. Sep 13 01:03:21.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:22.501273 ignition[605]: Ignition 2.14.0 Sep 13 01:03:22.501552 ignition[605]: Stage: fetch-offline Sep 13 01:03:22.501703 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:03:22.501862 ignition[605]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:03:22.507392 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:03:22.507641 ignition[605]: parsed url from cmdline: "" Sep 13 01:03:22.507684 ignition[605]: no config URL provided Sep 13 01:03:22.507800 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:03:22.507938 ignition[605]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:03:22.516121 ignition[605]: config successfully fetched Sep 13 01:03:22.516204 ignition[605]: parsing config with SHA512: f8f31bb1f4d8a8769c2270dd33ec229ae01832e2ce84705053edf9a880596227871d86531044197644668e5d8c3285638b00c681bfa8fa95b5cf17062bf62063 Sep 13 01:03:22.518457 unknown[605]: fetched base config from "system" Sep 13 01:03:22.518628 unknown[605]: fetched user config from "vmware" Sep 13 01:03:22.519143 ignition[605]: fetch-offline: fetch-offline passed Sep 13 01:03:22.519311 ignition[605]: Ignition finished successfully Sep 13 01:03:22.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:22.519950 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 01:03:22.520094 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 01:03:22.520557 systemd[1]: Starting ignition-kargs.service... 
Sep 13 01:03:22.525962 ignition[753]: Ignition 2.14.0 Sep 13 01:03:22.525969 ignition[753]: Stage: kargs Sep 13 01:03:22.526032 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:03:22.526042 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:03:22.527351 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:03:22.529040 ignition[753]: kargs: kargs passed Sep 13 01:03:22.529078 ignition[753]: Ignition finished successfully Sep 13 01:03:22.530066 systemd[1]: Finished ignition-kargs.service. Sep 13 01:03:22.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:22.530708 systemd[1]: Starting ignition-disks.service... Sep 13 01:03:22.535048 ignition[759]: Ignition 2.14.0 Sep 13 01:03:22.535267 ignition[759]: Stage: disks Sep 13 01:03:22.535464 ignition[759]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:03:22.535620 ignition[759]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:03:22.536930 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:03:22.538732 ignition[759]: disks: disks passed Sep 13 01:03:22.538887 ignition[759]: Ignition finished successfully Sep 13 01:03:22.539505 systemd[1]: Finished ignition-disks.service. Sep 13 01:03:22.539687 systemd[1]: Reached target initrd-root-device.target. Sep 13 01:03:22.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:22.539800 systemd[1]: Reached target local-fs-pre.target. Sep 13 01:03:22.539945 systemd[1]: Reached target local-fs.target. Sep 13 01:03:22.540111 systemd[1]: Reached target sysinit.target. Sep 13 01:03:22.540266 systemd[1]: Reached target basic.target. Sep 13 01:03:22.540908 systemd[1]: Starting systemd-fsck-root.service... Sep 13 01:03:23.044900 systemd-fsck[767]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks Sep 13 01:03:23.052555 systemd[1]: Finished systemd-fsck-root.service. Sep 13 01:03:23.055862 kernel: kauditd_printk_skb: 20 callbacks suppressed Sep 13 01:03:23.055902 kernel: audit: type=1130 audit(1757725403.051:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.053603 systemd[1]: Mounting sysroot.mount... Sep 13 01:03:23.128450 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 01:03:23.128608 systemd[1]: Mounted sysroot.mount. Sep 13 01:03:23.128945 systemd[1]: Reached target initrd-root-fs.target. Sep 13 01:03:23.130945 systemd[1]: Mounting sysroot-usr.mount... Sep 13 01:03:23.131642 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 01:03:23.131892 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 01:03:23.132156 systemd[1]: Reached target ignition-diskful.target. Sep 13 01:03:23.133184 systemd[1]: Mounted sysroot-usr.mount. Sep 13 01:03:23.137413 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Sep 13 01:03:23.138233 systemd[1]: Starting initrd-setup-root.service... Sep 13 01:03:23.142438 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 01:03:23.147121 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory Sep 13 01:03:23.150441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (773) Sep 13 01:03:23.153444 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:03:23.153460 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:03:23.153469 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:03:23.154188 initrd-setup-root[810]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 01:03:23.157086 initrd-setup-root[818]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 01:03:23.159438 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 01:03:23.160916 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 01:03:23.196452 systemd[1]: Finished initrd-setup-root.service. Sep 13 01:03:23.201145 kernel: audit: type=1130 audit(1757725403.195:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.197007 systemd[1]: Starting ignition-mount.service... Sep 13 01:03:23.199485 systemd[1]: Starting sysroot-boot.service... Sep 13 01:03:23.201229 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 01:03:23.201276 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Sep 13 01:03:23.208164 ignition[838]: INFO : Ignition 2.14.0 Sep 13 01:03:23.208390 ignition[838]: INFO : Stage: mount Sep 13 01:03:23.208565 ignition[838]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:03:23.208794 ignition[838]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:03:23.210412 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:03:23.211961 ignition[838]: INFO : mount: mount passed Sep 13 01:03:23.212103 ignition[838]: INFO : Ignition finished successfully Sep 13 01:03:23.212809 systemd[1]: Finished ignition-mount.service. Sep 13 01:03:23.215958 kernel: audit: type=1130 audit(1757725403.211:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.213378 systemd[1]: Starting ignition-files.service... Sep 13 01:03:23.217949 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 01:03:23.228129 systemd[1]: Finished sysroot-boot.service. Sep 13 01:03:23.229312 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (846) Sep 13 01:03:23.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:23.234392 kernel: audit: type=1130 audit(1757725403.228:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:23.234418 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:03:23.234436 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:03:23.234446 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:03:23.239440 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 01:03:23.241722 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 01:03:23.247228 ignition[867]: INFO : Ignition 2.14.0 Sep 13 01:03:23.247228 ignition[867]: INFO : Stage: files Sep 13 01:03:23.247570 ignition[867]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:03:23.247570 ignition[867]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:03:23.248794 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:03:23.251913 ignition[867]: DEBUG : files: compiled without relabeling support, skipping Sep 13 01:03:23.252373 ignition[867]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 01:03:23.252373 ignition[867]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 01:03:23.257459 ignition[867]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 01:03:23.257665 ignition[867]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 01:03:23.259672 unknown[867]: wrote ssh authorized keys file for user: core Sep 13 01:03:23.260113 ignition[867]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 01:03:23.261237 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 01:03:23.261237 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 01:03:23.311441 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 01:03:23.442359 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 01:03:23.445264 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:03:23.445465 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 01:03:23.522695 systemd-networkd[731]: ens192: Gained IPv6LL Sep 13 01:03:23.672858 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 01:03:23.836186 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:03:23.836441 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 01:03:23.836441 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 01:03:23.836441 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:03:23.836916 ignition[867]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:03:23.836916 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:03:23.838198 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:03:23.838198 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Sep 13 01:03:23.838198 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 01:03:23.842566 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1399123772" Sep 13 01:03:23.842756 ignition[867]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1399123772": device or resource busy Sep 13 01:03:23.842756 ignition[867]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1399123772", trying btrfs: device or resource busy Sep 13 01:03:23.842756 ignition[867]: INFO : files: createFilesystemsFiles: 
createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1399123772" Sep 13 01:03:23.842756 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1399123772" Sep 13 01:03:23.843685 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1399123772" Sep 13 01:03:23.843685 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1399123772" Sep 13 01:03:23.843685 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Sep 13 01:03:23.843685 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:03:23.843685 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 01:03:24.230200 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Sep 13 01:03:24.793858 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:03:24.802960 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 13 01:03:24.803147 ignition[867]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: 
op(12): [started] processing unit "prepare-helm.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(14): [started] processing unit "coreos-metadata.service" Sep 13 01:03:24.803147 ignition[867]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 01:03:24.804469 ignition[867]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 01:03:25.217623 ignition[867]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 01:03:25.217623 
ignition[867]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 01:03:25.218049 ignition[867]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:03:25.218049 ignition[867]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:03:25.218049 ignition[867]: INFO : files: files passed Sep 13 01:03:25.218049 ignition[867]: INFO : Ignition finished successfully Sep 13 01:03:25.222346 kernel: audit: type=1130 audit(1757725405.217:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.218416 systemd[1]: Finished ignition-files.service. Sep 13 01:03:25.219746 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 01:03:25.221031 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 01:03:25.221434 systemd[1]: Starting ignition-quench.service... Sep 13 01:03:25.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.224945 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 01:03:25.230254 kernel: audit: type=1130 audit(1757725405.223:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:25.230268 kernel: audit: type=1131 audit(1757725405.223:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.224992 systemd[1]: Finished ignition-quench.service. Sep 13 01:03:25.232902 kernel: audit: type=1130 audit(1757725405.228:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.232944 initrd-setup-root-after-ignition[893]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:03:25.226321 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 01:03:25.230367 systemd[1]: Reached target ignition-complete.target. Sep 13 01:03:25.233382 systemd[1]: Starting initrd-parse-etc.service... Sep 13 01:03:25.242275 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 01:03:25.242500 systemd[1]: Finished initrd-parse-etc.service. Sep 13 01:03:25.242774 systemd[1]: Reached target initrd-fs.target. Sep 13 01:03:25.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:25.247464 kernel: audit: type=1130 audit(1757725405.241:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.247479 kernel: audit: type=1131 audit(1757725405.241:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.247759 systemd[1]: Reached target initrd.target. Sep 13 01:03:25.247892 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 01:03:25.248375 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 01:03:25.255300 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 01:03:25.256054 systemd[1]: Starting initrd-cleanup.service... Sep 13 01:03:25.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.261916 systemd[1]: Stopped target nss-lookup.target. Sep 13 01:03:25.262225 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 01:03:25.262514 systemd[1]: Stopped target timers.target. Sep 13 01:03:25.262761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 01:03:25.262964 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 01:03:25.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.263317 systemd[1]: Stopped target initrd.target. 
Sep 13 01:03:25.263577 systemd[1]: Stopped target basic.target. Sep 13 01:03:25.263831 systemd[1]: Stopped target ignition-complete.target. Sep 13 01:03:25.264091 systemd[1]: Stopped target ignition-diskful.target. Sep 13 01:03:25.264345 systemd[1]: Stopped target initrd-root-device.target. Sep 13 01:03:25.264651 systemd[1]: Stopped target remote-fs.target. Sep 13 01:03:25.264900 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 01:03:25.265157 systemd[1]: Stopped target sysinit.target. Sep 13 01:03:25.265407 systemd[1]: Stopped target local-fs.target. Sep 13 01:03:25.265661 systemd[1]: Stopped target local-fs-pre.target. Sep 13 01:03:25.265913 systemd[1]: Stopped target swap.target. Sep 13 01:03:25.266135 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 01:03:25.266325 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 01:03:25.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.266656 systemd[1]: Stopped target cryptsetup.target. Sep 13 01:03:25.266890 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 01:03:25.267086 systemd[1]: Stopped dracut-initqueue.service. Sep 13 01:03:25.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.267396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 01:03:25.267514 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 01:03:25.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.267723 systemd[1]: Stopped target paths.target. 
Sep 13 01:03:25.267873 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 01:03:25.269462 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 01:03:25.269623 systemd[1]: Stopped target slices.target. Sep 13 01:03:25.269795 systemd[1]: Stopped target sockets.target. Sep 13 01:03:25.269968 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 01:03:25.270026 systemd[1]: Closed iscsid.socket. Sep 13 01:03:25.270444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 01:03:25.270528 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 01:03:25.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.270778 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 01:03:25.270854 systemd[1]: Stopped ignition-files.service. Sep 13 01:03:25.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.271604 systemd[1]: Stopping ignition-mount.service... Sep 13 01:03:25.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.272278 systemd[1]: Stopping iscsiuio.service... Sep 13 01:03:25.272778 systemd[1]: Stopping sysroot-boot.service... Sep 13 01:03:25.272874 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 13 01:03:25.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.272963 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 01:03:25.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.273169 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 01:03:25.273247 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 01:03:25.275090 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 01:03:25.275149 systemd[1]: Stopped iscsiuio.service. Sep 13 01:03:25.276124 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 01:03:25.276172 systemd[1]: Finished initrd-cleanup.service. Sep 13 01:03:25.276770 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 01:03:25.276788 systemd[1]: Closed iscsiuio.socket. 
Sep 13 01:03:25.279455 ignition[906]: INFO : Ignition 2.14.0 Sep 13 01:03:25.279455 ignition[906]: INFO : Stage: umount Sep 13 01:03:25.279455 ignition[906]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:03:25.279455 ignition[906]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Sep 13 01:03:25.281838 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 01:03:25.282575 ignition[906]: INFO : umount: umount passed Sep 13 01:03:25.282704 ignition[906]: INFO : Ignition finished successfully Sep 13 01:03:25.283289 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 01:03:25.283353 systemd[1]: Stopped ignition-mount.service. Sep 13 01:03:25.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.283617 systemd[1]: Stopped target network.target. Sep 13 01:03:25.283731 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 01:03:25.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.283755 systemd[1]: Stopped ignition-disks.service. Sep 13 01:03:25.284198 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 01:03:25.284219 systemd[1]: Stopped ignition-kargs.service. Sep 13 01:03:25.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.284416 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 13 01:03:25.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.284446 systemd[1]: Stopped ignition-setup.service. Sep 13 01:03:25.285112 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:03:25.285292 systemd[1]: Stopping systemd-resolved.service... Sep 13 01:03:25.287587 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:03:25.287640 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:03:25.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.288404 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 01:03:25.288852 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 01:03:25.288872 systemd[1]: Closed systemd-networkd.socket. Sep 13 01:03:25.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.289466 systemd[1]: Stopping network-cleanup.service... 
Sep 13 01:03:25.289561 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 01:03:25.289592 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 01:03:25.289722 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 13 01:03:25.289742 systemd[1]: Stopped afterburn-network-kargs.service. Sep 13 01:03:25.289843 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:03:25.289867 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:03:25.290028 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 01:03:25.290048 systemd[1]: Stopped systemd-modules-load.service. Sep 13 01:03:25.291000 audit: BPF prog-id=9 op=UNLOAD Sep 13 01:03:25.293026 systemd[1]: Stopping systemd-udevd.service... Sep 13 01:03:25.293796 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 01:03:25.294086 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 01:03:25.294146 systemd[1]: Stopped systemd-resolved.service. Sep 13 01:03:25.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.295408 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 01:03:25.295490 systemd[1]: Stopped systemd-udevd.service. Sep 13 01:03:25.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.295000 audit: BPF prog-id=6 op=UNLOAD Sep 13 01:03:25.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:25.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.296247 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 01:03:25.296271 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 01:03:25.296384 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 01:03:25.296406 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 01:03:25.296536 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 01:03:25.296558 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 01:03:25.296663 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 01:03:25.296683 systemd[1]: Stopped dracut-cmdline.service. Sep 13 01:03:25.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.296782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:03:25.296800 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 01:03:25.297268 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Sep 13 01:03:25.297370 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 01:03:25.297402 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 01:03:25.297644 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 01:03:25.297665 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 01:03:25.297773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:03:25.297793 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 01:03:25.298524 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 01:03:25.301588 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 01:03:25.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.301634 systemd[1]: Stopped network-cleanup.service. Sep 13 01:03:25.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.301917 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 01:03:25.301957 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 01:03:25.408063 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 01:03:25.408120 systemd[1]: Stopped sysroot-boot.service. Sep 13 01:03:25.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:25.408388 systemd[1]: Reached target initrd-switch-root.target. Sep 13 01:03:25.408502 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 01:03:25.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:25.408526 systemd[1]: Stopped initrd-setup-root.service. Sep 13 01:03:25.409073 systemd[1]: Starting initrd-switch-root.service... Sep 13 01:03:25.415637 systemd[1]: Switching root. Sep 13 01:03:25.429712 iscsid[738]: iscsid shutting down. Sep 13 01:03:25.429855 systemd-journald[217]: Journal stopped Sep 13 01:03:28.324496 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 13 01:03:28.324519 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 01:03:28.324528 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 01:03:28.324535 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 01:03:28.324540 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 01:03:28.324548 kernel: SELinux: policy capability open_perms=1 Sep 13 01:03:28.324554 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 01:03:28.324560 kernel: SELinux: policy capability always_check_network=0 Sep 13 01:03:28.324566 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 01:03:28.324572 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 01:03:28.324577 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 01:03:28.324583 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 01:03:28.324591 systemd[1]: Successfully loaded SELinux policy in 50.116ms. Sep 13 01:03:28.324598 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.753ms. 
Sep 13 01:03:28.324607 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:03:28.324614 systemd[1]: Detected virtualization vmware. Sep 13 01:03:28.324621 systemd[1]: Detected architecture x86-64. Sep 13 01:03:28.324629 systemd[1]: Detected first boot. Sep 13 01:03:28.324636 systemd[1]: Initializing machine ID from random generator. Sep 13 01:03:28.324642 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 01:03:28.324649 systemd[1]: Populated /etc with preset unit settings. Sep 13 01:03:28.324655 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:03:28.324663 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:03:28.324670 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 13 01:03:28.324678 kernel: kauditd_printk_skb: 52 callbacks suppressed Sep 13 01:03:28.324684 kernel: audit: type=1334 audit(1757725408.208:86): prog-id=12 op=LOAD Sep 13 01:03:28.324691 kernel: audit: type=1334 audit(1757725408.208:87): prog-id=3 op=UNLOAD Sep 13 01:03:28.324697 kernel: audit: type=1334 audit(1757725408.210:88): prog-id=13 op=LOAD Sep 13 01:03:28.324703 kernel: audit: type=1334 audit(1757725408.211:89): prog-id=14 op=LOAD Sep 13 01:03:28.324709 kernel: audit: type=1334 audit(1757725408.211:90): prog-id=4 op=UNLOAD Sep 13 01:03:28.324715 kernel: audit: type=1334 audit(1757725408.211:91): prog-id=5 op=UNLOAD Sep 13 01:03:28.324722 kernel: audit: type=1334 audit(1757725408.212:92): prog-id=15 op=LOAD Sep 13 01:03:28.324728 kernel: audit: type=1334 audit(1757725408.212:93): prog-id=12 op=UNLOAD Sep 13 01:03:28.324734 kernel: audit: type=1334 audit(1757725408.213:94): prog-id=16 op=LOAD Sep 13 01:03:28.324741 kernel: audit: type=1334 audit(1757725408.214:95): prog-id=17 op=LOAD Sep 13 01:03:28.324747 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 01:03:28.324753 systemd[1]: Stopped iscsid.service. Sep 13 01:03:28.324760 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 01:03:28.324768 systemd[1]: Stopped initrd-switch-root.service. Sep 13 01:03:28.324776 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 01:03:28.324784 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 01:03:28.324792 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 01:03:28.324799 systemd[1]: Created slice system-getty.slice. Sep 13 01:03:28.324805 systemd[1]: Created slice system-modprobe.slice. Sep 13 01:03:28.324812 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 01:03:28.324819 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 01:03:28.324826 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Sep 13 01:03:28.324834 systemd[1]: Created slice user.slice. Sep 13 01:03:28.324841 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:03:28.324848 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 01:03:28.324855 systemd[1]: Set up automount boot.automount. Sep 13 01:03:28.324861 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 01:03:28.324868 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 01:03:28.324875 systemd[1]: Stopped target initrd-fs.target. Sep 13 01:03:28.324881 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 01:03:28.324889 systemd[1]: Reached target integritysetup.target. Sep 13 01:03:28.324896 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:03:28.324903 systemd[1]: Reached target remote-fs.target. Sep 13 01:03:28.324910 systemd[1]: Reached target slices.target. Sep 13 01:03:28.324917 systemd[1]: Reached target swap.target. Sep 13 01:03:28.324924 systemd[1]: Reached target torcx.target. Sep 13 01:03:28.324931 systemd[1]: Reached target veritysetup.target. Sep 13 01:03:28.324938 systemd[1]: Listening on systemd-coredump.socket. Sep 13 01:03:28.324944 systemd[1]: Listening on systemd-initctl.socket. Sep 13 01:03:28.324953 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:03:28.324960 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 01:03:28.324967 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 01:03:28.324974 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 01:03:28.324981 systemd[1]: Mounting dev-hugepages.mount... Sep 13 01:03:28.324989 systemd[1]: Mounting dev-mqueue.mount... Sep 13 01:03:28.324996 systemd[1]: Mounting media.mount... Sep 13 01:03:28.325003 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:28.325010 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 01:03:28.325017 systemd[1]: Mounting sys-kernel-tracing.mount... 
Sep 13 01:03:28.325024 systemd[1]: Mounting tmp.mount... Sep 13 01:03:28.325031 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 01:03:28.325038 systemd[1]: Starting ignition-delete-config.service... Sep 13 01:03:28.325045 systemd[1]: Starting kmod-static-nodes.service... Sep 13 01:03:28.325054 systemd[1]: Starting modprobe@configfs.service... Sep 13 01:03:28.325061 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:03:28.325069 systemd[1]: Starting modprobe@drm.service... Sep 13 01:03:28.325076 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:03:28.325083 systemd[1]: Starting modprobe@fuse.service... Sep 13 01:03:28.325090 systemd[1]: Starting modprobe@loop.service... Sep 13 01:03:28.325097 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 01:03:28.325104 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 01:03:28.325111 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 01:03:28.325119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 01:03:28.325126 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 01:03:28.325133 systemd[1]: Stopped systemd-journald.service. Sep 13 01:03:28.325140 systemd[1]: Starting systemd-journald.service... Sep 13 01:03:28.325147 kernel: fuse: init (API version 7.34) Sep 13 01:03:28.325153 systemd[1]: Starting systemd-modules-load.service... Sep 13 01:03:28.325161 systemd[1]: Starting systemd-network-generator.service... Sep 13 01:03:28.325168 systemd[1]: Starting systemd-remount-fs.service... Sep 13 01:03:28.325176 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 01:03:28.325183 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 01:03:28.325190 systemd[1]: Stopped verity-setup.service. Sep 13 01:03:28.325197 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 01:03:28.325204 systemd[1]: Mounted dev-hugepages.mount. Sep 13 01:03:28.325211 systemd[1]: Mounted dev-mqueue.mount. Sep 13 01:03:28.325219 systemd[1]: Mounted media.mount. Sep 13 01:03:28.325226 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 01:03:28.325234 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 01:03:28.325242 systemd[1]: Mounted tmp.mount. Sep 13 01:03:28.325253 systemd-journald[1028]: Journal started Sep 13 01:03:28.325284 systemd-journald[1028]: Runtime Journal (/run/log/journal/b6aa083376aa43588a8ad1c9fd6436e6) is 4.8M, max 38.8M, 34.0M free. Sep 13 01:03:25.583000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:03:25.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:03:25.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:03:25.665000 audit: BPF prog-id=10 op=LOAD Sep 13 01:03:25.665000 audit: BPF prog-id=10 op=UNLOAD Sep 13 01:03:25.665000 audit: BPF prog-id=11 op=LOAD Sep 13 01:03:25.665000 audit: BPF prog-id=11 op=UNLOAD Sep 13 01:03:26.091000 audit[941]: AVC avc: denied { associate } for pid=941 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 01:03:26.091000 audit[941]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b4 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=924 pid=941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 
13 01:03:26.091000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:03:26.093000 audit[941]: AVC avc: denied { associate } for pid=941 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 01:03:26.093000 audit[941]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d999 a2=1ed a3=0 items=2 ppid=924 pid=941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:03:26.093000 audit: CWD cwd="/" Sep 13 01:03:26.093000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:26.093000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:03:26.093000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:03:28.208000 audit: BPF prog-id=12 op=LOAD Sep 13 01:03:28.208000 audit: BPF prog-id=3 op=UNLOAD Sep 13 01:03:28.210000 audit: BPF prog-id=13 op=LOAD Sep 13 01:03:28.211000 audit: BPF prog-id=14 op=LOAD Sep 13 01:03:28.211000 audit: BPF prog-id=4 op=UNLOAD Sep 13 01:03:28.211000 audit: BPF prog-id=5 
op=UNLOAD Sep 13 01:03:28.212000 audit: BPF prog-id=15 op=LOAD Sep 13 01:03:28.212000 audit: BPF prog-id=12 op=UNLOAD Sep 13 01:03:28.213000 audit: BPF prog-id=16 op=LOAD Sep 13 01:03:28.214000 audit: BPF prog-id=17 op=LOAD Sep 13 01:03:28.214000 audit: BPF prog-id=13 op=UNLOAD Sep 13 01:03:28.214000 audit: BPF prog-id=14 op=UNLOAD Sep 13 01:03:28.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.224000 audit: BPF prog-id=15 op=UNLOAD Sep 13 01:03:28.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:28.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.296000 audit: BPF prog-id=18 op=LOAD Sep 13 01:03:28.296000 audit: BPF prog-id=19 op=LOAD Sep 13 01:03:28.296000 audit: BPF prog-id=20 op=LOAD Sep 13 01:03:28.296000 audit: BPF prog-id=16 op=UNLOAD Sep 13 01:03:28.296000 audit: BPF prog-id=17 op=UNLOAD Sep 13 01:03:28.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.320000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 01:03:28.320000 audit[1028]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff8334bf50 a2=4000 a3=7fff8334bfec items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:03:28.320000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 01:03:26.084158 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:03:28.207778 systemd[1]: 
Queued start job for default target multi-user.target. Sep 13 01:03:26.085879 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 01:03:28.207787 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 13 01:03:26.085899 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 01:03:28.216376 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 01:03:26.085928 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 01:03:26.085938 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 01:03:26.085968 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 01:03:26.085979 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 01:03:26.086156 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 01:03:26.086189 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 01:03:28.326737 jq[1007]: true Sep 13 01:03:26.086200 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="profile found" name=vendor 
path=/usr/share/torcx/profiles/vendor.json Sep 13 01:03:26.091931 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 01:03:26.091953 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 01:03:26.091965 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 01:03:28.329076 systemd[1]: Started systemd-journald.service. Sep 13 01:03:28.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:28.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:28.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:26.091974 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 01:03:28.327490 systemd[1]: Finished kmod-static-nodes.service. 
Sep 13 01:03:26.091988 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 01:03:28.327755 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 01:03:26.092003 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 01:03:28.327865 systemd[1]: Finished modprobe@configfs.service.
Sep 13 01:03:27.920650 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:28.328114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:03:27.920805 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:28.328209 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:03:27.920866 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:28.328593 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:03:27.920966 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:03:28.328712 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:03:27.920999 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 01:03:28.329156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:03:27.921041 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-09-13T01:03:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 01:03:28.329234 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:03:28.329481 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 01:03:28.329558 systemd[1]: Finished modprobe@fuse.service.
Sep 13 01:03:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.330875 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:03:28.331100 systemd[1]: Finished systemd-network-generator.service.
Sep 13 01:03:28.331322 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 01:03:28.331675 systemd[1]: Reached target network-pre.target.
Sep 13 01:03:28.334183 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 01:03:28.335097 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 01:03:28.337096 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 01:03:28.337436 jq[1037]: true
Sep 13 01:03:28.339438 kernel: loop: module loaded
Sep 13 01:03:28.339703 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 01:03:28.340801 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 01:03:28.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.340934 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:03:28.341755 systemd[1]: Starting systemd-random-seed.service...
Sep 13 01:03:28.343850 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:03:28.345098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:03:28.345205 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:03:28.345863 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 01:03:28.346033 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 01:03:28.347521 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:03:28.354554 systemd-journald[1028]: Time spent on flushing to /var/log/journal/b6aa083376aa43588a8ad1c9fd6436e6 is 54.869ms for 2021 entries.
Sep 13 01:03:28.354554 systemd-journald[1028]: System Journal (/var/log/journal/b6aa083376aa43588a8ad1c9fd6436e6) is 8.0M, max 584.8M, 576.8M free.
Sep 13 01:03:28.433986 systemd-journald[1028]: Received client request to flush runtime journal.
Sep 13 01:03:28.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.356727 systemd[1]: Finished systemd-random-seed.service.
Sep 13 01:03:28.356903 systemd[1]: Reached target first-boot-complete.target.
Sep 13 01:03:28.377637 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:03:28.391132 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 01:03:28.392596 systemd[1]: Starting systemd-sysusers.service...
Sep 13 01:03:28.434667 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 01:03:28.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.460722 systemd[1]: Finished systemd-sysusers.service.
Sep 13 01:03:28.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.461756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:03:28.467204 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:03:28.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.468179 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 01:03:28.475509 udevadm[1073]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 01:03:28.519136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:03:28.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:28.543614 ignition[1053]: Ignition 2.14.0
Sep 13 01:03:28.543984 ignition[1053]: deleting config from guestinfo properties
Sep 13 01:03:28.552170 ignition[1053]: Successfully deleted config
Sep 13 01:03:28.552854 systemd[1]: Finished ignition-delete-config.service.
Sep 13 01:03:28.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:29.120327 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 01:03:29.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:29.119000 audit: BPF prog-id=21 op=LOAD
Sep 13 01:03:29.119000 audit: BPF prog-id=22 op=LOAD
Sep 13 01:03:29.119000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 01:03:29.119000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 01:03:29.121454 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:03:29.133598 systemd-udevd[1074]: Using default interface naming scheme 'v252'.
Sep 13 01:03:29.491729 systemd[1]: Started systemd-udevd.service.
Sep 13 01:03:29.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:29.491000 audit: BPF prog-id=23 op=LOAD
Sep 13 01:03:29.493004 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:03:29.508684 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 13 01:03:29.526000 audit: BPF prog-id=24 op=LOAD
Sep 13 01:03:29.526000 audit: BPF prog-id=25 op=LOAD
Sep 13 01:03:29.526000 audit: BPF prog-id=26 op=LOAD
Sep 13 01:03:29.528507 systemd[1]: Starting systemd-userdbd.service...
Sep 13 01:03:29.539440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 01:03:29.550438 kernel: ACPI: button: Power Button [PWRF]
Sep 13 01:03:29.581861 systemd[1]: Started systemd-userdbd.service.
Sep 13 01:03:29.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:29.623000 audit[1089]: AVC avc: denied { confidentiality } for pid=1089 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 01:03:29.629659 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
Sep 13 01:03:29.630804 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Sep 13 01:03:29.623000 audit[1089]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5626c7957420 a1=338ec a2=7fb388e46bc5 a3=5 items=110 ppid=1074 pid=1089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:29.623000 audit: CWD cwd="/"
Sep 13 01:03:29.623000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=1 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=2 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=3 name=(null) inode=24273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=4 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=5 name=(null) inode=24274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=6 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=7 name=(null) inode=24275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=8 name=(null) inode=24275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=9 name=(null) inode=24276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=10 name=(null) inode=24275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=11 name=(null) inode=24277 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=12 name=(null) inode=24275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=13 name=(null) inode=24278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=14 name=(null) inode=24275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=15 name=(null) inode=24279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=16 name=(null) inode=24275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=17 name=(null) inode=24280 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=18 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=19 name=(null) inode=24281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=20 name=(null) inode=24281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=21 name=(null) inode=24282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=22 name=(null) inode=24281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=23 name=(null) inode=24283 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=24 name=(null) inode=24281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=25 name=(null) inode=24284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=26 name=(null) inode=24281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=27 name=(null) inode=24285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=28 name=(null) inode=24281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=29 name=(null) inode=24286 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=30 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=31 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=32 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=33 name=(null) inode=24288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=34 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=35 name=(null) inode=24289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=36 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=37 name=(null) inode=24290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=38 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=39 name=(null) inode=24291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=40 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=41 name=(null) inode=24292 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=42 name=(null) inode=24272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=43 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=44 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=45 name=(null) inode=24294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=46 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=47 name=(null) inode=24295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=48 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=49 name=(null) inode=24296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=50 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=51 name=(null) inode=24297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=52 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=53 name=(null) inode=24298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=55 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=56 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=57 name=(null) inode=24300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=58 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=59 name=(null) inode=24301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=60 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=61 name=(null) inode=24302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=62 name=(null) inode=24302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=63 name=(null) inode=24303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=64 name=(null) inode=24302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=65 name=(null) inode=24304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=66 name=(null) inode=24302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=67 name=(null) inode=24305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=68 name=(null) inode=24302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=69 name=(null) inode=24306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=70 name=(null) inode=24302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=71 name=(null) inode=24307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=72 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=73 name=(null) inode=24308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=74 name=(null) inode=24308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=75 name=(null) inode=24309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=76 name=(null) inode=24308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=77 name=(null) inode=24310 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=78 name=(null) inode=24308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=79 name=(null) inode=24311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=80 name=(null) inode=24308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=81 name=(null) inode=24312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=82 name=(null) inode=24308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=83 name=(null) inode=24313 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=84 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=85 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=86 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=87 name=(null) inode=24315 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=88 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=89 name=(null) inode=24316 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=90 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=91 name=(null) inode=24317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=92 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=93 name=(null) inode=24318 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=94 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=95 name=(null) inode=24319 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=96 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=97 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=98 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=99 name=(null) inode=24321 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=100 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=101 name=(null) inode=24322 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=102 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=103 name=(null) inode=24323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=104 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=105 name=(null) inode=24324 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=106 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=107 name=(null) inode=24325 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PATH item=109 name=(null) inode=24326 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:03:29.623000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 01:03:29.636597 kernel: Guest personality initialized and is active
Sep 13 01:03:29.636629 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 01:03:29.636643 kernel: Initialized host personality
Sep 13 01:03:29.643507 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Sep 13 01:03:29.655444 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 01:03:29.657763 (udev-worker)[1088]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 13 01:03:29.668473 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 01:03:29.793837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 01:03:29.848647 systemd[1]: Finished systemd-udev-settle.service. Sep 13 01:03:29.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:29.849642 systemd[1]: Starting lvm2-activation-early.service... Sep 13 01:03:30.049015 systemd-networkd[1081]: lo: Link UP Sep 13 01:03:30.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:30.049020 systemd-networkd[1081]: lo: Gained carrier Sep 13 01:03:30.049334 systemd-networkd[1081]: Enumeration completed Sep 13 01:03:30.049432 systemd[1]: Started systemd-networkd.service. Sep 13 01:03:30.049878 systemd-networkd[1081]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 13 01:03:30.070139 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 13 01:03:30.070279 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 13 01:03:30.071326 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Sep 13 01:03:30.071494 systemd-networkd[1081]: ens192: Link UP Sep 13 01:03:30.071624 systemd-networkd[1081]: ens192: Gained carrier Sep 13 01:03:30.092469 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:03:30.122021 systemd[1]: Finished lvm2-activation-early.service. 
Sep 13 01:03:30.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:30.122211 systemd[1]: Reached target cryptsetup.target. Sep 13 01:03:30.123179 systemd[1]: Starting lvm2-activation.service... Sep 13 01:03:30.125872 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:03:30.151026 systemd[1]: Finished lvm2-activation.service. Sep 13 01:03:30.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:30.151209 systemd[1]: Reached target local-fs-pre.target. Sep 13 01:03:30.151323 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 01:03:30.151342 systemd[1]: Reached target local-fs.target. Sep 13 01:03:30.151442 systemd[1]: Reached target machines.target. Sep 13 01:03:30.152402 systemd[1]: Starting ldconfig.service... Sep 13 01:03:30.156964 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:03:30.156992 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:03:30.157765 systemd[1]: Starting systemd-boot-update.service... Sep 13 01:03:30.158644 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 01:03:30.159604 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 01:03:30.160694 systemd[1]: Starting systemd-sysext.service... 
Sep 13 01:03:30.171723 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Sep 13 01:03:30.172544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 01:03:30.174385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 01:03:30.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:30.175892 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 01:03:30.199945 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 01:03:30.200055 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 01:03:30.227450 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 01:03:31.195448 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 01:03:31.210940 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 01:03:31.211372 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 01:03:31.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.219509 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31) Sep 13 01:03:31.219509 systemd-fsck[1119]: /dev/sda1: 790 files, 120761/258078 clusters Sep 13 01:03:31.221328 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 01:03:31.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.222481 systemd[1]: Mounting boot.mount... 
Sep 13 01:03:31.232439 kernel: loop1: detected capacity change from 0 to 224512 Sep 13 01:03:31.253180 systemd[1]: Mounted boot.mount. Sep 13 01:03:31.283443 systemd[1]: Finished systemd-boot-update.service. Sep 13 01:03:31.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.336309 (sd-sysext)[1124]: Using extensions 'kubernetes'. Sep 13 01:03:31.337066 (sd-sysext)[1124]: Merged extensions into '/usr'. Sep 13 01:03:31.352643 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.353798 systemd[1]: Mounting usr-share-oem.mount... Sep 13 01:03:31.354666 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:03:31.356536 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:03:31.358172 systemd[1]: Starting modprobe@loop.service... Sep 13 01:03:31.358313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.358402 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:03:31.358499 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.360182 systemd[1]: Mounted usr-share-oem.mount. Sep 13 01:03:31.360465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:03:31.360590 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:03:31.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:31.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.360950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:03:31.361035 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:03:31.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.361374 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:03:31.361453 systemd[1]: Finished modprobe@loop.service. Sep 13 01:03:31.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.361779 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:03:31.361878 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.362470 systemd[1]: Finished systemd-sysext.service. 
Sep 13 01:03:31.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.363608 systemd[1]: Starting ensure-sysext.service... Sep 13 01:03:31.364632 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 01:03:31.369483 systemd[1]: Reloading. Sep 13 01:03:31.375219 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 01:03:31.378337 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 01:03:31.380807 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 01:03:31.413824 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2025-09-13T01:03:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:03:31.415258 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2025-09-13T01:03:31Z" level=info msg="torcx already run" Sep 13 01:03:31.474142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:03:31.474154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:03:31.488662 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 13 01:03:31.523000 audit: BPF prog-id=27 op=LOAD Sep 13 01:03:31.523000 audit: BPF prog-id=24 op=UNLOAD Sep 13 01:03:31.523000 audit: BPF prog-id=28 op=LOAD Sep 13 01:03:31.523000 audit: BPF prog-id=29 op=LOAD Sep 13 01:03:31.523000 audit: BPF prog-id=25 op=UNLOAD Sep 13 01:03:31.523000 audit: BPF prog-id=26 op=UNLOAD Sep 13 01:03:31.524000 audit: BPF prog-id=30 op=LOAD Sep 13 01:03:31.524000 audit: BPF prog-id=23 op=UNLOAD Sep 13 01:03:31.525000 audit: BPF prog-id=31 op=LOAD Sep 13 01:03:31.525000 audit: BPF prog-id=18 op=UNLOAD Sep 13 01:03:31.525000 audit: BPF prog-id=32 op=LOAD Sep 13 01:03:31.525000 audit: BPF prog-id=33 op=LOAD Sep 13 01:03:31.525000 audit: BPF prog-id=19 op=UNLOAD Sep 13 01:03:31.525000 audit: BPF prog-id=20 op=UNLOAD Sep 13 01:03:31.526000 audit: BPF prog-id=34 op=LOAD Sep 13 01:03:31.526000 audit: BPF prog-id=35 op=LOAD Sep 13 01:03:31.526000 audit: BPF prog-id=21 op=UNLOAD Sep 13 01:03:31.526000 audit: BPF prog-id=22 op=UNLOAD Sep 13 01:03:31.535896 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.536723 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:03:31.537948 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:03:31.538716 systemd[1]: Starting modprobe@loop.service... Sep 13 01:03:31.538925 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.538989 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:03:31.539050 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.539739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:03:31.539817 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 01:03:31.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.540332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:03:31.540402 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:03:31.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.540883 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:03:31.540950 systemd[1]: Finished modprobe@loop.service. Sep 13 01:03:31.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.542103 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.543002 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 13 01:03:31.544141 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:03:31.544913 systemd[1]: Starting modprobe@loop.service... Sep 13 01:03:31.545126 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.545194 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:03:31.545256 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.545867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:03:31.545951 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:03:31.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.546492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:03:31.546615 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:03:31.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.547057 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 01:03:31.547176 systemd[1]: Finished modprobe@loop.service. Sep 13 01:03:31.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.547693 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:03:31.547811 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.549691 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.551325 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:03:31.552468 systemd[1]: Starting modprobe@drm.service... Sep 13 01:03:31.553760 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:03:31.554688 systemd[1]: Starting modprobe@loop.service... Sep 13 01:03:31.555581 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.555656 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:03:31.556485 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 01:03:31.556662 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:03:31.557295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:03:31.557387 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 01:03:31.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.557723 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:03:31.557796 systemd[1]: Finished modprobe@drm.service. Sep 13 01:03:31.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.558085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:03:31.558154 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:03:31.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:03:31.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.558633 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:03:31.558707 systemd[1]: Finished modprobe@loop.service. Sep 13 01:03:31.559114 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:03:31.559175 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:03:31.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.560299 systemd[1]: Finished ensure-sysext.service. Sep 13 01:03:31.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.641820 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 01:03:31.642918 systemd[1]: Starting audit-rules.service... Sep 13 01:03:31.643769 systemd[1]: Starting clean-ca-certificates.service... Sep 13 01:03:31.644780 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 01:03:31.644000 audit: BPF prog-id=36 op=LOAD Sep 13 01:03:31.645956 systemd[1]: Starting systemd-resolved.service... Sep 13 01:03:31.645000 audit: BPF prog-id=37 op=LOAD Sep 13 01:03:31.647408 systemd[1]: Starting systemd-timesyncd.service... Sep 13 01:03:31.648859 systemd[1]: Starting systemd-update-utmp.service... 
Sep 13 01:03:31.655000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.657853 systemd[1]: Finished systemd-update-utmp.service. Sep 13 01:03:31.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.669342 systemd[1]: Finished clean-ca-certificates.service. Sep 13 01:03:31.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.669522 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 01:03:31.703048 systemd[1]: Started systemd-timesyncd.service. Sep 13 01:03:31.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.703250 systemd[1]: Reached target time-set.target. Sep 13 01:03:31.708873 systemd-resolved[1225]: Positive Trust Anchors: Sep 13 01:03:31.708887 systemd-resolved[1225]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:03:31.708906 systemd-resolved[1225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 01:03:31.714484 systemd-networkd[1081]: ens192: Gained IPv6LL Sep 13 01:03:31.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.715259 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 01:03:31.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:03:31.732102 systemd[1]: Finished systemd-journal-catalog-update.service. 
Sep 13 01:03:31.753000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 01:03:31.753000 audit[1243]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd64547ce0 a2=420 a3=0 items=0 ppid=1222 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:03:31.753000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 01:03:31.754881 augenrules[1243]: No rules Sep 13 01:03:31.755284 systemd[1]: Finished audit-rules.service. Sep 13 01:03:31.819356 systemd-resolved[1225]: Defaulting to hostname 'linux'. Sep 13 01:03:31.820863 systemd[1]: Started systemd-resolved.service. Sep 13 01:03:31.821053 systemd[1]: Reached target network.target. Sep 13 01:03:31.821171 systemd[1]: Reached target network-online.target. Sep 13 01:03:31.821291 systemd[1]: Reached target nss-lookup.target. Sep 13 01:05:05.106388 systemd-resolved[1225]: Clock change detected. Flushing caches. Sep 13 01:05:05.106492 systemd-timesyncd[1226]: Contacted time server 144.202.0.197:123 (0.flatcar.pool.ntp.org). Sep 13 01:05:05.106596 systemd-timesyncd[1226]: Initial clock synchronization to Sat 2025-09-13 01:05:05.106307 UTC. Sep 13 01:05:05.989346 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 01:05:06.212085 systemd[1]: Finished ldconfig.service. Sep 13 01:05:06.213176 systemd[1]: Starting systemd-update-done.service... Sep 13 01:05:06.221751 systemd[1]: Finished systemd-update-done.service. Sep 13 01:05:06.221930 systemd[1]: Reached target sysinit.target. Sep 13 01:05:06.222114 systemd[1]: Started motdgen.path. Sep 13 01:05:06.222220 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Sep 13 01:05:06.222421 systemd[1]: Started logrotate.timer. Sep 13 01:05:06.222550 systemd[1]: Started mdadm.timer. Sep 13 01:05:06.222638 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 01:05:06.222734 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 01:05:06.222755 systemd[1]: Reached target paths.target. Sep 13 01:05:06.222848 systemd[1]: Reached target timers.target. Sep 13 01:05:06.223115 systemd[1]: Listening on dbus.socket. Sep 13 01:05:06.224013 systemd[1]: Starting docker.socket... Sep 13 01:05:06.229765 systemd[1]: Listening on sshd.socket. Sep 13 01:05:06.230037 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:05:06.230319 systemd[1]: Listening on docker.socket. Sep 13 01:05:06.230575 systemd[1]: Reached target sockets.target. Sep 13 01:05:06.230716 systemd[1]: Reached target basic.target. Sep 13 01:05:06.230874 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 01:05:06.230889 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 01:05:06.231738 systemd[1]: Starting containerd.service... Sep 13 01:05:06.232687 systemd[1]: Starting dbus.service... Sep 13 01:05:06.233901 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 01:05:06.234757 systemd[1]: Starting extend-filesystems.service... Sep 13 01:05:06.235314 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 01:05:06.236292 jq[1253]: false Sep 13 01:05:06.238636 systemd[1]: Starting kubelet.service... Sep 13 01:05:06.239695 systemd[1]: Starting motdgen.service... 
Sep 13 01:05:06.240820 systemd[1]: Starting prepare-helm.service... Sep 13 01:05:06.242517 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 01:05:06.243487 systemd[1]: Starting sshd-keygen.service... Sep 13 01:05:06.246494 systemd[1]: Starting systemd-logind.service... Sep 13 01:05:06.246739 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:05:06.246778 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 01:05:06.247244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 01:05:06.248558 systemd[1]: Starting update-engine.service... Sep 13 01:05:06.250198 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 01:05:06.251873 systemd[1]: Starting vmtoolsd.service... Sep 13 01:05:06.254530 jq[1265]: true Sep 13 01:05:06.256560 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 01:05:06.256733 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 01:05:06.263875 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 01:05:06.263980 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 13 01:05:06.264257 jq[1271]: true Sep 13 01:05:06.280667 extend-filesystems[1254]: Found loop1 Sep 13 01:05:06.280966 extend-filesystems[1254]: Found sda Sep 13 01:05:06.281113 extend-filesystems[1254]: Found sda1 Sep 13 01:05:06.281256 extend-filesystems[1254]: Found sda2 Sep 13 01:05:06.281406 extend-filesystems[1254]: Found sda3 Sep 13 01:05:06.281542 extend-filesystems[1254]: Found usr Sep 13 01:05:06.281686 extend-filesystems[1254]: Found sda4 Sep 13 01:05:06.281825 extend-filesystems[1254]: Found sda6 Sep 13 01:05:06.281960 extend-filesystems[1254]: Found sda7 Sep 13 01:05:06.282106 extend-filesystems[1254]: Found sda9 Sep 13 01:05:06.283008 extend-filesystems[1254]: Checking size of /dev/sda9 Sep 13 01:05:06.289006 systemd[1]: Started vmtoolsd.service. Sep 13 01:05:06.292708 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 01:05:06.292824 systemd[1]: Finished motdgen.service. Sep 13 01:05:06.300784 tar[1270]: linux-amd64/LICENSE Sep 13 01:05:06.300959 tar[1270]: linux-amd64/helm Sep 13 01:05:06.319911 extend-filesystems[1254]: Old size kept for /dev/sda9 Sep 13 01:05:06.319911 extend-filesystems[1254]: Found sr0 Sep 13 01:05:06.319835 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 01:05:06.320495 systemd[1]: Finished extend-filesystems.service. Sep 13 01:05:06.327470 systemd-logind[1262]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 01:05:06.327483 systemd-logind[1262]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 01:05:06.327761 systemd-logind[1262]: New seat seat0. Sep 13 01:05:06.331157 bash[1290]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:05:06.331628 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Sep 13 01:05:06.345151 env[1291]: time="2025-09-13T01:05:06.345123936Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 01:05:06.384185 env[1291]: time="2025-09-13T01:05:06.384160361Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 01:05:06.385252 env[1291]: time="2025-09-13T01:05:06.385239707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:05:06.386554 env[1291]: time="2025-09-13T01:05:06.386535277Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:05:06.387341 dbus-daemon[1252]: [system] SELinux support is enabled
Sep 13 01:05:06.387477 systemd[1]: Started dbus.service.
Sep 13 01:05:06.388803 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 01:05:06.389149 dbus-daemon[1252]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 01:05:06.388822 systemd[1]: Reached target system-config.target.
Sep 13 01:05:06.388946 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 01:05:06.388961 systemd[1]: Reached target user-config.target.
Sep 13 01:05:06.389079 systemd[1]: Started systemd-logind.service.
Sep 13 01:05:06.390406 env[1291]: time="2025-09-13T01:05:06.390391183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:05:06.390610 env[1291]: time="2025-09-13T01:05:06.390597538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:05:06.390668 env[1291]: time="2025-09-13T01:05:06.390657613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 01:05:06.390714 env[1291]: time="2025-09-13T01:05:06.390703589Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 01:05:06.390771 env[1291]: time="2025-09-13T01:05:06.390761913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 01:05:06.390867 env[1291]: time="2025-09-13T01:05:06.390858062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:05:06.391054 env[1291]: time="2025-09-13T01:05:06.391044965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:05:06.392989 env[1291]: time="2025-09-13T01:05:06.392974703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:05:06.393048 env[1291]: time="2025-09-13T01:05:06.393029171Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 01:05:06.393140 env[1291]: time="2025-09-13T01:05:06.393130078Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 01:05:06.393197 env[1291]: time="2025-09-13T01:05:06.393180566Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399099449Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399120843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399128549Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399152929Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399161873Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399169409Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399176693Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399184931Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399192185Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399218292Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399229628Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399237146Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399295005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 01:05:06.400113 env[1291]: time="2025-09-13T01:05:06.399340649Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399519649Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399538298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399547082Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399579950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399590756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399598138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399604068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399610508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399619851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399658620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399668776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399676938Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399742394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399751576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.401802 env[1291]: time="2025-09-13T01:05:06.399758713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.406636 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 01:05:06.403750 systemd[1]: Started containerd.service.
Sep 13 01:05:06.406698 env[1291]: time="2025-09-13T01:05:06.399765024Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 01:05:06.406698 env[1291]: time="2025-09-13T01:05:06.399772776Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 01:05:06.406698 env[1291]: time="2025-09-13T01:05:06.399779099Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 01:05:06.406698 env[1291]: time="2025-09-13T01:05:06.399789346Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 01:05:06.406698 env[1291]: time="2025-09-13T01:05:06.399810986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.399920848Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.399951973Z" level=info msg="Connect containerd service"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.399970453Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.403525784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.403654340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.403676799Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.403704238Z" level=info msg="containerd successfully booted in 0.059018s"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.405414668Z" level=info msg="Start subscribing containerd event"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.405447780Z" level=info msg="Start recovering state"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.405487349Z" level=info msg="Start event monitor"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.405500900Z" level=info msg="Start snapshots syncer"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.405508791Z" level=info msg="Start cni network conf syncer for default"
Sep 13 01:05:06.406783 env[1291]: time="2025-09-13T01:05:06.405514187Z" level=info msg="Start streaming server"
Sep 13 01:05:06.440381 update_engine[1263]: I0913 01:05:06.429610 1263 main.cc:92] Flatcar Update Engine starting
Sep 13 01:05:06.450932 systemd[1]: Started update-engine.service.
Sep 13 01:05:06.452489 systemd[1]: Started locksmithd.service.
Sep 13 01:05:06.453829 update_engine[1263]: I0913 01:05:06.453758 1263 update_check_scheduler.cc:74] Next update check in 6m47s
Sep 13 01:05:06.639261 sshd_keygen[1274]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 01:05:06.654191 systemd[1]: Finished sshd-keygen.service.
Sep 13 01:05:06.655403 systemd[1]: Starting issuegen.service...
Sep 13 01:05:06.659445 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 01:05:06.659545 systemd[1]: Finished issuegen.service.
Sep 13 01:05:06.660742 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 01:05:06.672242 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 01:05:06.673228 systemd[1]: Started getty@tty1.service.
Sep 13 01:05:06.674100 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 01:05:06.674298 systemd[1]: Reached target getty.target.
Sep 13 01:05:06.746456 tar[1270]: linux-amd64/README.md
Sep 13 01:05:06.749414 systemd[1]: Finished prepare-helm.service.
Sep 13 01:05:06.867199 locksmithd[1331]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 01:05:09.827700 systemd[1]: Started kubelet.service.
Sep 13 01:05:09.828114 systemd[1]: Reached target multi-user.target.
Sep 13 01:05:09.829385 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 01:05:09.834508 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 01:05:09.834606 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 01:05:09.834779 systemd[1]: Startup finished in 913ms (kernel) + 6.968s (initrd) + 11.052s (userspace) = 18.934s.
Sep 13 01:05:10.016514 login[1345]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:05:10.017688 login[1346]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:05:10.040624 systemd[1]: Created slice user-500.slice.
Sep 13 01:05:10.041544 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 01:05:10.043311 systemd-logind[1262]: New session 2 of user core.
Sep 13 01:05:10.045631 systemd-logind[1262]: New session 1 of user core.
Sep 13 01:05:10.053497 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 01:05:10.054641 systemd[1]: Starting user@500.service...
Sep 13 01:05:10.074781 (systemd)[1401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:10.214385 systemd[1401]: Queued start job for default target default.target.
Sep 13 01:05:10.214790 systemd[1401]: Reached target paths.target.
Sep 13 01:05:10.214804 systemd[1401]: Reached target sockets.target.
Sep 13 01:05:10.214826 systemd[1401]: Reached target timers.target.
Sep 13 01:05:10.214835 systemd[1401]: Reached target basic.target.
Sep 13 01:05:10.214904 systemd[1]: Started user@500.service.
Sep 13 01:05:10.215653 systemd[1]: Started session-1.scope.
Sep 13 01:05:10.216146 systemd[1]: Started session-2.scope.
Sep 13 01:05:10.216838 systemd[1401]: Reached target default.target.
Sep 13 01:05:10.216972 systemd[1401]: Startup finished in 138ms.
Sep 13 01:05:11.105852 kubelet[1398]: E0913 01:05:11.105825 1398 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:05:11.107068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:05:11.107152 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:21.276607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:05:21.276800 systemd[1]: Stopped kubelet.service.
Sep 13 01:05:21.277979 systemd[1]: Starting kubelet.service...
Sep 13 01:05:21.347947 systemd[1]: Started kubelet.service.
Sep 13 01:05:21.386896 kubelet[1429]: E0913 01:05:21.386870 1429 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:05:21.389026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:05:21.389102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:31.526526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 01:05:31.526644 systemd[1]: Stopped kubelet.service.
Sep 13 01:05:31.527695 systemd[1]: Starting kubelet.service...
Sep 13 01:05:31.586566 systemd[1]: Started kubelet.service.
Sep 13 01:05:31.738823 kubelet[1438]: E0913 01:05:31.738797 1438 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:05:31.740054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:05:31.740129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:36.593180 systemd[1]: Created slice system-sshd.slice.
Sep 13 01:05:36.594189 systemd[1]: Started sshd@0-139.178.70.99:22-147.75.109.163:57322.service.
Sep 13 01:05:36.655897 sshd[1445]: Accepted publickey for core from 147.75.109.163 port 57322 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:36.656950 sshd[1445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:36.660237 systemd[1]: Started session-3.scope.
Sep 13 01:05:36.660592 systemd-logind[1262]: New session 3 of user core.
Sep 13 01:05:36.709454 systemd[1]: Started sshd@1-139.178.70.99:22-147.75.109.163:57324.service.
Sep 13 01:05:36.752325 sshd[1450]: Accepted publickey for core from 147.75.109.163 port 57324 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:36.753698 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:36.757476 systemd[1]: Started session-4.scope.
Sep 13 01:05:36.757904 systemd-logind[1262]: New session 4 of user core.
Sep 13 01:05:36.807876 sshd[1450]: pam_unix(sshd:session): session closed for user core
Sep 13 01:05:36.810707 systemd[1]: Started sshd@2-139.178.70.99:22-147.75.109.163:57340.service.
Sep 13 01:05:36.811306 systemd[1]: sshd@1-139.178.70.99:22-147.75.109.163:57324.service: Deactivated successfully.
Sep 13 01:05:36.811693 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 01:05:36.813549 systemd-logind[1262]: Session 4 logged out. Waiting for processes to exit.
Sep 13 01:05:36.814061 systemd-logind[1262]: Removed session 4.
Sep 13 01:05:36.846901 sshd[1455]: Accepted publickey for core from 147.75.109.163 port 57340 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:36.847383 sshd[1455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:36.850559 systemd[1]: Started session-5.scope.
Sep 13 01:05:36.851235 systemd-logind[1262]: New session 5 of user core.
Sep 13 01:05:36.898134 sshd[1455]: pam_unix(sshd:session): session closed for user core
Sep 13 01:05:36.900514 systemd[1]: sshd@2-139.178.70.99:22-147.75.109.163:57340.service: Deactivated successfully.
Sep 13 01:05:36.900931 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 01:05:36.901392 systemd-logind[1262]: Session 5 logged out. Waiting for processes to exit.
Sep 13 01:05:36.902307 systemd[1]: Started sshd@3-139.178.70.99:22-147.75.109.163:57344.service.
Sep 13 01:05:36.902993 systemd-logind[1262]: Removed session 5.
Sep 13 01:05:36.937195 sshd[1462]: Accepted publickey for core from 147.75.109.163 port 57344 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:36.938135 sshd[1462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:36.940980 systemd-logind[1262]: New session 6 of user core.
Sep 13 01:05:36.941472 systemd[1]: Started session-6.scope.
Sep 13 01:05:36.990751 sshd[1462]: pam_unix(sshd:session): session closed for user core
Sep 13 01:05:36.993423 systemd[1]: Started sshd@4-139.178.70.99:22-147.75.109.163:57358.service.
Sep 13 01:05:36.994902 systemd[1]: sshd@3-139.178.70.99:22-147.75.109.163:57344.service: Deactivated successfully.
Sep 13 01:05:36.995279 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 01:05:36.995673 systemd-logind[1262]: Session 6 logged out. Waiting for processes to exit.
Sep 13 01:05:36.996116 systemd-logind[1262]: Removed session 6.
Sep 13 01:05:37.028983 sshd[1467]: Accepted publickey for core from 147.75.109.163 port 57358 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8
Sep 13 01:05:37.029780 sshd[1467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:05:37.032913 systemd[1]: Started session-7.scope.
Sep 13 01:05:37.033118 systemd-logind[1262]: New session 7 of user core.
Sep 13 01:05:37.098173 sudo[1471]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 01:05:37.099013 sudo[1471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 01:05:37.113305 systemd[1]: Starting docker.service...
Sep 13 01:05:37.139604 env[1482]: time="2025-09-13T01:05:37.139567010Z" level=info msg="Starting up"
Sep 13 01:05:37.140289 env[1482]: time="2025-09-13T01:05:37.140276397Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 01:05:37.140347 env[1482]: time="2025-09-13T01:05:37.140333494Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 01:05:37.140422 env[1482]: time="2025-09-13T01:05:37.140406661Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 01:05:37.140472 env[1482]: time="2025-09-13T01:05:37.140460047Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 01:05:37.141310 env[1482]: time="2025-09-13T01:05:37.141298774Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 01:05:37.141361 env[1482]: time="2025-09-13T01:05:37.141350947Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 01:05:37.141436 env[1482]: time="2025-09-13T01:05:37.141424537Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 01:05:37.141624 env[1482]: time="2025-09-13T01:05:37.141614297Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 01:05:37.144912 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3184179074-merged.mount: Deactivated successfully.
Sep 13 01:05:37.156424 env[1482]: time="2025-09-13T01:05:37.156403517Z" level=info msg="Loading containers: start."
Sep 13 01:05:37.260390 kernel: Initializing XFRM netlink socket
Sep 13 01:05:37.289786 env[1482]: time="2025-09-13T01:05:37.289765206Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 01:05:37.338462 systemd-networkd[1081]: docker0: Link UP
Sep 13 01:05:37.349105 env[1482]: time="2025-09-13T01:05:37.348438344Z" level=info msg="Loading containers: done."
Sep 13 01:05:37.354745 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck571228894-merged.mount: Deactivated successfully.
Sep 13 01:05:37.357635 env[1482]: time="2025-09-13T01:05:37.357616785Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 01:05:37.357826 env[1482]: time="2025-09-13T01:05:37.357815370Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 01:05:37.357919 env[1482]: time="2025-09-13T01:05:37.357909656Z" level=info msg="Daemon has completed initialization"
Sep 13 01:05:37.369503 systemd[1]: Started docker.service.
Sep 13 01:05:37.372287 env[1482]: time="2025-09-13T01:05:37.372250527Z" level=info msg="API listen on /run/docker.sock"
Sep 13 01:05:38.554610 env[1291]: time="2025-09-13T01:05:38.554578474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 13 01:05:39.104342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852318480.mount: Deactivated successfully.
Sep 13 01:05:40.299129 env[1291]: time="2025-09-13T01:05:40.299092398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:40.299810 env[1291]: time="2025-09-13T01:05:40.299793409Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:40.301205 env[1291]: time="2025-09-13T01:05:40.301186101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:40.305979 env[1291]: time="2025-09-13T01:05:40.305955128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:40.306273 env[1291]: time="2025-09-13T01:05:40.306258953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 13 01:05:40.306642 env[1291]: time="2025-09-13T01:05:40.306625249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 13 01:05:41.776596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 01:05:41.776772 systemd[1]: Stopped kubelet.service.
Sep 13 01:05:41.777990 systemd[1]: Starting kubelet.service...
Sep 13 01:05:41.841927 env[1291]: time="2025-09-13T01:05:41.841907967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:41.842671 systemd[1]: Started kubelet.service.
Sep 13 01:05:41.862724 env[1291]: time="2025-09-13T01:05:41.862700448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:41.872632 env[1291]: time="2025-09-13T01:05:41.872603934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:41.890456 kubelet[1608]: E0913 01:05:41.890421 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:05:41.891573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:05:41.891650 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:05:41.896511 env[1291]: time="2025-09-13T01:05:41.896486955Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:41.897025 env[1291]: time="2025-09-13T01:05:41.897007918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 13 01:05:41.897407 env[1291]: time="2025-09-13T01:05:41.897394420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 13 01:05:43.218557 env[1291]: time="2025-09-13T01:05:43.218503814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:43.234674 env[1291]: time="2025-09-13T01:05:43.234646769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:43.239663 env[1291]: time="2025-09-13T01:05:43.239642578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:43.248630 env[1291]: time="2025-09-13T01:05:43.248606653Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:43.248967 env[1291]: time="2025-09-13T01:05:43.248947018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 13 01:05:43.249609 env[1291]: time="2025-09-13T01:05:43.249593026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 13 01:05:44.163689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123546569.mount: Deactivated successfully.
Sep 13 01:05:44.790586 env[1291]: time="2025-09-13T01:05:44.790542221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:44.807458 env[1291]: time="2025-09-13T01:05:44.807420581Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:44.815822 env[1291]: time="2025-09-13T01:05:44.815782845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:44.823551 env[1291]: time="2025-09-13T01:05:44.823509633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:05:44.824078 env[1291]: time="2025-09-13T01:05:44.824048167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 13 01:05:44.824789 env[1291]: time="2025-09-13T01:05:44.824764919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 01:05:45.343046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376999130.mount: Deactivated successfully.
Sep 13 01:05:46.232957 env[1291]: time="2025-09-13T01:05:46.232924617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.239946 env[1291]: time="2025-09-13T01:05:46.239921021Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.242951 env[1291]: time="2025-09-13T01:05:46.242927715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.245031 env[1291]: time="2025-09-13T01:05:46.245012112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.245518 env[1291]: time="2025-09-13T01:05:46.245499067Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 01:05:46.245844 env[1291]: time="2025-09-13T01:05:46.245823331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 01:05:46.689004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187819804.mount: Deactivated successfully. 
Sep 13 01:05:46.700810 env[1291]: time="2025-09-13T01:05:46.700767191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.702009 env[1291]: time="2025-09-13T01:05:46.701992433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.703417 env[1291]: time="2025-09-13T01:05:46.703395164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.705311 env[1291]: time="2025-09-13T01:05:46.705285183Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:46.706015 env[1291]: time="2025-09-13T01:05:46.705982267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 01:05:46.707109 env[1291]: time="2025-09-13T01:05:46.707091067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 01:05:47.269131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234812872.mount: Deactivated successfully. 
Sep 13 01:05:49.700307 env[1291]: time="2025-09-13T01:05:49.700238111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:49.724453 env[1291]: time="2025-09-13T01:05:49.724415367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:49.732129 env[1291]: time="2025-09-13T01:05:49.732101428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:49.738597 env[1291]: time="2025-09-13T01:05:49.738566011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:49.739147 env[1291]: time="2025-09-13T01:05:49.739118847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 01:05:51.695620 systemd[1]: Stopped kubelet.service. Sep 13 01:05:51.697647 systemd[1]: Starting kubelet.service... Sep 13 01:05:51.715504 systemd[1]: Reloading. 
Sep 13 01:05:51.777987 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2025-09-13T01:05:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:05:51.778202 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2025-09-13T01:05:51Z" level=info msg="torcx already run" Sep 13 01:05:51.835451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:05:51.835598 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:05:51.847564 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:05:51.913885 update_engine[1263]: I0913 01:05:51.913861 1263 update_attempter.cc:509] Updating boot flags... Sep 13 01:05:51.916821 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 01:05:51.916886 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 01:05:51.917028 systemd[1]: Stopped kubelet.service. Sep 13 01:05:51.918219 systemd[1]: Starting kubelet.service... Sep 13 01:05:53.949125 systemd[1]: Started kubelet.service. Sep 13 01:05:54.124941 kubelet[1738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:05:54.124941 kubelet[1738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 13 01:05:54.124941 kubelet[1738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:05:54.125239 kubelet[1738]: I0913 01:05:54.124987 1738 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:05:54.560027 kubelet[1738]: I0913 01:05:54.559999 1738 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 01:05:54.560027 kubelet[1738]: I0913 01:05:54.560020 1738 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:05:54.560271 kubelet[1738]: I0913 01:05:54.560205 1738 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 01:05:54.758572 kubelet[1738]: E0913 01:05:54.758548 1738 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:54.759410 kubelet[1738]: I0913 01:05:54.759397 1738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:05:54.789850 kubelet[1738]: E0913 01:05:54.789831 1738 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:05:54.789978 kubelet[1738]: I0913 01:05:54.789971 1738 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been 
enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:05:54.791987 kubelet[1738]: I0913 01:05:54.791972 1738 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:05:54.792108 kubelet[1738]: I0913 01:05:54.792088 1738 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:05:54.792211 kubelet[1738]: I0913 01:05:54.792103 1738 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyMa
nagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:05:54.792279 kubelet[1738]: I0913 01:05:54.792215 1738 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:05:54.792279 kubelet[1738]: I0913 01:05:54.792222 1738 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 01:05:54.792319 kubelet[1738]: I0913 01:05:54.792291 1738 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:05:54.829504 kubelet[1738]: I0913 01:05:54.829029 1738 kubelet.go:446] "Attempting to sync node with API server" Sep 13 01:05:54.829504 kubelet[1738]: I0913 01:05:54.829056 1738 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:05:54.829504 kubelet[1738]: I0913 01:05:54.829096 1738 kubelet.go:352] "Adding apiserver pod source" Sep 13 01:05:54.829504 kubelet[1738]: I0913 01:05:54.829105 1738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:05:54.887041 kubelet[1738]: W0913 01:05:54.887012 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:54.887174 kubelet[1738]: E0913 01:05:54.887156 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:54.887272 kubelet[1738]: W0913 01:05:54.887262 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
139.178.70.99:6443: connect: connection refused Sep 13 01:05:54.887330 kubelet[1738]: E0913 01:05:54.887319 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:54.887619 kubelet[1738]: I0913 01:05:54.887608 1738 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:05:54.887944 kubelet[1738]: I0913 01:05:54.887935 1738 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:05:54.898488 kubelet[1738]: W0913 01:05:54.898471 1738 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 01:05:54.903887 kubelet[1738]: I0913 01:05:54.903870 1738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:05:54.904006 kubelet[1738]: I0913 01:05:54.903998 1738 server.go:1287] "Started kubelet" Sep 13 01:05:54.922553 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 01:05:54.922716 kubelet[1738]: I0913 01:05:54.922678 1738 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:05:54.923458 kubelet[1738]: I0913 01:05:54.923449 1738 server.go:479] "Adding debug handlers to kubelet server" Sep 13 01:05:54.938283 kubelet[1738]: I0913 01:05:54.938256 1738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:05:54.940260 kubelet[1738]: I0913 01:05:54.940225 1738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:05:54.940444 kubelet[1738]: I0913 01:05:54.940435 1738 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:05:54.942284 kubelet[1738]: I0913 01:05:54.942266 1738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:05:54.954732 kubelet[1738]: I0913 01:05:54.954712 1738 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:05:54.954977 kubelet[1738]: E0913 01:05:54.954967 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:05:54.955939 kubelet[1738]: I0913 01:05:54.955931 1738 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:05:54.955990 kubelet[1738]: E0913 01:05:54.940549 1738 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.99:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b21450ad8643 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 01:05:54.903975491 +0000 UTC m=+0.938615885,LastTimestamp:2025-09-13 01:05:54.903975491 +0000 UTC m=+0.938615885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 01:05:54.956110 kubelet[1738]: I0913 01:05:54.956102 1738 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:05:54.956671 kubelet[1738]: W0913 01:05:54.956651 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:54.956736 kubelet[1738]: E0913 01:05:54.956723 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:54.957048 kubelet[1738]: E0913 01:05:54.957030 1738 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="200ms" Sep 13 01:05:54.957212 kubelet[1738]: I0913 01:05:54.957196 1738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:05:54.963290 kubelet[1738]: I0913 01:05:54.963273 1738 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:05:54.963290 kubelet[1738]: I0913 01:05:54.963288 1738 factory.go:221] Registration of the 
systemd container factory successfully Sep 13 01:05:54.968008 kubelet[1738]: E0913 01:05:54.967991 1738 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:05:54.971220 kubelet[1738]: I0913 01:05:54.971209 1738 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:05:54.971298 kubelet[1738]: I0913 01:05:54.971290 1738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:05:54.971357 kubelet[1738]: I0913 01:05:54.971350 1738 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:05:54.978276 kubelet[1738]: I0913 01:05:54.978260 1738 policy_none.go:49] "None policy: Start" Sep 13 01:05:54.978376 kubelet[1738]: I0913 01:05:54.978358 1738 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:05:54.978425 kubelet[1738]: I0913 01:05:54.978418 1738 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:05:54.992944 systemd[1]: Created slice kubepods.slice. Sep 13 01:05:54.996024 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 01:05:54.998226 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 01:05:55.004420 kubelet[1738]: I0913 01:05:55.004407 1738 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:05:55.004580 kubelet[1738]: I0913 01:05:55.004573 1738 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:05:55.004650 kubelet[1738]: I0913 01:05:55.004621 1738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:05:55.006710 kubelet[1738]: I0913 01:05:55.006663 1738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:05:55.007149 kubelet[1738]: E0913 01:05:55.007139 1738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 01:05:55.007238 kubelet[1738]: E0913 01:05:55.007229 1738 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 01:05:55.007331 kubelet[1738]: I0913 01:05:55.007324 1738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:05:55.007450 kubelet[1738]: I0913 01:05:55.007443 1738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 01:05:55.007500 kubelet[1738]: I0913 01:05:55.007487 1738 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 01:05:55.007558 kubelet[1738]: I0913 01:05:55.007551 1738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 01:05:55.007601 kubelet[1738]: I0913 01:05:55.007594 1738 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 01:05:55.007680 kubelet[1738]: E0913 01:05:55.007673 1738 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 01:05:55.008081 kubelet[1738]: W0913 01:05:55.008071 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:55.008148 kubelet[1738]: E0913 01:05:55.008136 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:55.106156 kubelet[1738]: I0913 01:05:55.106093 1738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:55.107360 
kubelet[1738]: E0913 01:05:55.107345 1738 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Sep 13 01:05:55.112032 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 13 01:05:55.131416 kubelet[1738]: E0913 01:05:55.131396 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:55.133922 systemd[1]: Created slice kubepods-burstable-poda0b381e9c248d782688c35a4fece3e93.slice. Sep 13 01:05:55.135421 kubelet[1738]: E0913 01:05:55.135320 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:55.142227 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 13 01:05:55.143787 kubelet[1738]: E0913 01:05:55.143767 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:55.157137 kubelet[1738]: I0913 01:05:55.157113 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:55.157137 kubelet[1738]: I0913 01:05:55.157135 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:55.157281 kubelet[1738]: I0913 01:05:55.157146 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 01:05:55.157281 kubelet[1738]: I0913 01:05:55.157162 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0b381e9c248d782688c35a4fece3e93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b381e9c248d782688c35a4fece3e93\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:55.157281 kubelet[1738]: I0913 01:05:55.157171 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a0b381e9c248d782688c35a4fece3e93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b381e9c248d782688c35a4fece3e93\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:55.157281 kubelet[1738]: I0913 01:05:55.157179 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:55.157281 kubelet[1738]: I0913 01:05:55.157187 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:55.157452 kubelet[1738]: I0913 01:05:55.157196 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0b381e9c248d782688c35a4fece3e93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0b381e9c248d782688c35a4fece3e93\") " pod="kube-system/kube-apiserver-localhost" Sep 13 01:05:55.157452 kubelet[1738]: I0913 01:05:55.157204 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 01:05:55.157724 kubelet[1738]: E0913 01:05:55.157705 1738 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="400ms" Sep 13 01:05:55.308843 kubelet[1738]: I0913 01:05:55.308824 1738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:55.309380 kubelet[1738]: E0913 01:05:55.309344 1738 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Sep 13 01:05:55.432812 env[1291]: time="2025-09-13T01:05:55.432595133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:55.436718 env[1291]: time="2025-09-13T01:05:55.436674981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0b381e9c248d782688c35a4fece3e93,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:55.448144 env[1291]: time="2025-09-13T01:05:55.448109330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 13 01:05:55.558296 kubelet[1738]: E0913 01:05:55.558261 1738 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="800ms" Sep 13 01:05:55.711160 kubelet[1738]: I0913 01:05:55.711102 1738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:55.711611 kubelet[1738]: E0913 01:05:55.711592 1738 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection 
refused" node="localhost" Sep 13 01:05:55.789801 kubelet[1738]: W0913 01:05:55.789595 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:55.789801 kubelet[1738]: E0913 01:05:55.789801 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:55.895985 kubelet[1738]: W0913 01:05:55.895953 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:55.896116 kubelet[1738]: E0913 01:05:55.895993 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:55.964189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158708697.mount: Deactivated successfully. 
Sep 13 01:05:56.021479 kubelet[1738]: W0913 01:05:56.021440 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:56.021592 kubelet[1738]: E0913 01:05:56.021485 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:56.042770 env[1291]: time="2025-09-13T01:05:56.042738486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.064232 env[1291]: time="2025-09-13T01:05:56.064207400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.075903 env[1291]: time="2025-09-13T01:05:56.075880216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.083607 env[1291]: time="2025-09-13T01:05:56.083588485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.098652 env[1291]: time="2025-09-13T01:05:56.098626795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.104877 env[1291]: 
time="2025-09-13T01:05:56.104854121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.116076 env[1291]: time="2025-09-13T01:05:56.116051021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.137516 env[1291]: time="2025-09-13T01:05:56.137474784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.149196 env[1291]: time="2025-09-13T01:05:56.149163442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.152753 env[1291]: time="2025-09-13T01:05:56.152728033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.157338 env[1291]: time="2025-09-13T01:05:56.157316524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.160653 env[1291]: time="2025-09-13T01:05:56.160636254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:05:56.202684 env[1291]: time="2025-09-13T01:05:56.202504740Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:56.202684 env[1291]: time="2025-09-13T01:05:56.202530269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:56.202684 env[1291]: time="2025-09-13T01:05:56.202538681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:56.202684 env[1291]: time="2025-09-13T01:05:56.202609929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/301d80e55dfb3e14e04b89cc0725a888ebe9eaf7960e9c3d5da18e499fd44f9a pid=1783 runtime=io.containerd.runc.v2 Sep 13 01:05:56.203240 env[1291]: time="2025-09-13T01:05:56.203204288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:56.203323 env[1291]: time="2025-09-13T01:05:56.203225017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:56.203431 env[1291]: time="2025-09-13T01:05:56.203311603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:56.203674 env[1291]: time="2025-09-13T01:05:56.203635491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3697661d0897b8d32dbc3c9183f88a364d2543b28875cc9fe01d43f3cb5ee09d pid=1792 runtime=io.containerd.runc.v2 Sep 13 01:05:56.205685 env[1291]: time="2025-09-13T01:05:56.205643257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:05:56.205773 env[1291]: time="2025-09-13T01:05:56.205665019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:05:56.205873 env[1291]: time="2025-09-13T01:05:56.205849461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:05:56.206021 env[1291]: time="2025-09-13T01:05:56.205993976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bed9ccb7dd4189da0b56fa66e5ba18966f196908ec2a7733a99515c44b606652 pid=1806 runtime=io.containerd.runc.v2 Sep 13 01:05:56.214490 systemd[1]: Started cri-containerd-301d80e55dfb3e14e04b89cc0725a888ebe9eaf7960e9c3d5da18e499fd44f9a.scope. Sep 13 01:05:56.232952 systemd[1]: Started cri-containerd-3697661d0897b8d32dbc3c9183f88a364d2543b28875cc9fe01d43f3cb5ee09d.scope. Sep 13 01:05:56.233777 systemd[1]: Started cri-containerd-bed9ccb7dd4189da0b56fa66e5ba18966f196908ec2a7733a99515c44b606652.scope. 
Sep 13 01:05:56.270153 env[1291]: time="2025-09-13T01:05:56.270120116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0b381e9c248d782688c35a4fece3e93,Namespace:kube-system,Attempt:0,} returns sandbox id \"301d80e55dfb3e14e04b89cc0725a888ebe9eaf7960e9c3d5da18e499fd44f9a\"" Sep 13 01:05:56.272219 env[1291]: time="2025-09-13T01:05:56.272199701Z" level=info msg="CreateContainer within sandbox \"301d80e55dfb3e14e04b89cc0725a888ebe9eaf7960e9c3d5da18e499fd44f9a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 01:05:56.285400 env[1291]: time="2025-09-13T01:05:56.285346932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"bed9ccb7dd4189da0b56fa66e5ba18966f196908ec2a7733a99515c44b606652\"" Sep 13 01:05:56.285671 env[1291]: time="2025-09-13T01:05:56.285506107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3697661d0897b8d32dbc3c9183f88a364d2543b28875cc9fe01d43f3cb5ee09d\"" Sep 13 01:05:56.286646 env[1291]: time="2025-09-13T01:05:56.286629041Z" level=info msg="CreateContainer within sandbox \"bed9ccb7dd4189da0b56fa66e5ba18966f196908ec2a7733a99515c44b606652\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 01:05:56.286795 env[1291]: time="2025-09-13T01:05:56.286777661Z" level=info msg="CreateContainer within sandbox \"3697661d0897b8d32dbc3c9183f88a364d2543b28875cc9fe01d43f3cb5ee09d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 01:05:56.309276 kubelet[1738]: W0913 01:05:56.309214 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
139.178.70.99:6443: connect: connection refused Sep 13 01:05:56.309276 kubelet[1738]: E0913 01:05:56.309255 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:56.358947 kubelet[1738]: E0913 01:05:56.358915 1738 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="1.6s" Sep 13 01:05:56.409578 env[1291]: time="2025-09-13T01:05:56.409537864Z" level=info msg="CreateContainer within sandbox \"301d80e55dfb3e14e04b89cc0725a888ebe9eaf7960e9c3d5da18e499fd44f9a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6bde61a13635156caae1603bc96e57fd29640b4e62c233e95fda22c5d2ac2679\"" Sep 13 01:05:56.409966 env[1291]: time="2025-09-13T01:05:56.409948274Z" level=info msg="StartContainer for \"6bde61a13635156caae1603bc96e57fd29640b4e62c233e95fda22c5d2ac2679\"" Sep 13 01:05:56.421836 systemd[1]: Started cri-containerd-6bde61a13635156caae1603bc96e57fd29640b4e62c233e95fda22c5d2ac2679.scope. 
Sep 13 01:05:56.434744 env[1291]: time="2025-09-13T01:05:56.434698976Z" level=info msg="CreateContainer within sandbox \"3697661d0897b8d32dbc3c9183f88a364d2543b28875cc9fe01d43f3cb5ee09d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"83276cd462bb101dd5afdb78eafb443f22a5d87188627207353454d63b637e1a\"" Sep 13 01:05:56.435074 env[1291]: time="2025-09-13T01:05:56.435050512Z" level=info msg="StartContainer for \"83276cd462bb101dd5afdb78eafb443f22a5d87188627207353454d63b637e1a\"" Sep 13 01:05:56.445790 systemd[1]: Started cri-containerd-83276cd462bb101dd5afdb78eafb443f22a5d87188627207353454d63b637e1a.scope. Sep 13 01:05:56.447809 env[1291]: time="2025-09-13T01:05:56.447764985Z" level=info msg="CreateContainer within sandbox \"bed9ccb7dd4189da0b56fa66e5ba18966f196908ec2a7733a99515c44b606652\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4b24f8755e54295f76a9782f8eb332bc227a0ea18c124a038cb17773474c6c1\"" Sep 13 01:05:56.448188 env[1291]: time="2025-09-13T01:05:56.448170746Z" level=info msg="StartContainer for \"f4b24f8755e54295f76a9782f8eb332bc227a0ea18c124a038cb17773474c6c1\"" Sep 13 01:05:56.459562 systemd[1]: Started cri-containerd-f4b24f8755e54295f76a9782f8eb332bc227a0ea18c124a038cb17773474c6c1.scope. 
Sep 13 01:05:56.480561 env[1291]: time="2025-09-13T01:05:56.480489087Z" level=info msg="StartContainer for \"6bde61a13635156caae1603bc96e57fd29640b4e62c233e95fda22c5d2ac2679\" returns successfully" Sep 13 01:05:56.513344 kubelet[1738]: I0913 01:05:56.513324 1738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:56.518490 kubelet[1738]: E0913 01:05:56.513549 1738 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Sep 13 01:05:56.527697 env[1291]: time="2025-09-13T01:05:56.527663601Z" level=info msg="StartContainer for \"83276cd462bb101dd5afdb78eafb443f22a5d87188627207353454d63b637e1a\" returns successfully" Sep 13 01:05:56.533237 env[1291]: time="2025-09-13T01:05:56.533213528Z" level=info msg="StartContainer for \"f4b24f8755e54295f76a9782f8eb332bc227a0ea18c124a038cb17773474c6c1\" returns successfully" Sep 13 01:05:56.891307 kubelet[1738]: E0913 01:05:56.891274 1738 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:57.011994 kubelet[1738]: E0913 01:05:57.011971 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:57.013324 kubelet[1738]: E0913 01:05:57.013310 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:57.014495 kubelet[1738]: E0913 01:05:57.014477 1738 kubelet.go:3190] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:57.934110 kubelet[1738]: W0913 01:05:57.934056 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:57.934110 kubelet[1738]: E0913 01:05:57.934108 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:57.959837 kubelet[1738]: E0913 01:05:57.959804 1738 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="3.2s" Sep 13 01:05:58.015975 kubelet[1738]: E0913 01:05:58.015866 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:58.016320 kubelet[1738]: E0913 01:05:58.016240 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:58.097766 kubelet[1738]: W0913 01:05:58.097705 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:58.097993 kubelet[1738]: E0913 01:05:58.097778 1738 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:58.115459 kubelet[1738]: I0913 01:05:58.115144 1738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:05:58.115459 kubelet[1738]: E0913 01:05:58.115429 1738 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Sep 13 01:05:58.634347 kubelet[1738]: W0913 01:05:58.634300 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:58.634538 kubelet[1738]: E0913 01:05:58.634519 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:58.787116 kubelet[1738]: W0913 01:05:58.787096 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Sep 13 01:05:58.787252 kubelet[1738]: E0913 01:05:58.787240 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:05:59.017208 kubelet[1738]: E0913 01:05:59.017141 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:05:59.017605 kubelet[1738]: E0913 01:05:59.017555 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:06:00.444086 kubelet[1738]: E0913 01:06:00.444065 1738 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 01:06:00.499964 kubelet[1738]: E0913 01:06:00.499933 1738 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 01:06:00.850565 kubelet[1738]: E0913 01:06:00.850533 1738 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 01:06:01.162830 kubelet[1738]: E0913 01:06:01.162748 1738 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 01:06:01.278756 kubelet[1738]: E0913 01:06:01.278737 1738 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 01:06:01.317075 kubelet[1738]: I0913 01:06:01.317060 1738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 01:06:01.379602 kubelet[1738]: I0913 01:06:01.379575 1738 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 
01:06:01.379602 kubelet[1738]: E0913 01:06:01.379606 1738 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 01:06:01.392828 kubelet[1738]: E0913 01:06:01.392805 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:01.493370 kubelet[1738]: E0913 01:06:01.493288 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:01.593844 kubelet[1738]: E0913 01:06:01.593819 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:01.694407 kubelet[1738]: E0913 01:06:01.694378 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:01.795098 kubelet[1738]: E0913 01:06:01.795066 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:01.895829 kubelet[1738]: E0913 01:06:01.895806 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:01.996320 kubelet[1738]: E0913 01:06:01.996287 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:02.097184 kubelet[1738]: E0913 01:06:02.097110 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:02.143449 systemd[1]: Reloading. 
Sep 13 01:06:02.197749 kubelet[1738]: E0913 01:06:02.197724 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:02.204962 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2025-09-13T01:06:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:06:02.205194 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2025-09-13T01:06:02Z" level=info msg="torcx already run" Sep 13 01:06:02.260009 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:06:02.260136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:06:02.273080 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:06:02.298349 kubelet[1738]: E0913 01:06:02.298327 1738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 01:06:02.337687 systemd[1]: Stopping kubelet.service... Sep 13 01:06:02.358880 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:06:02.359001 systemd[1]: Stopped kubelet.service. Sep 13 01:06:02.360677 systemd[1]: Starting kubelet.service... Sep 13 01:06:03.974534 systemd[1]: Started kubelet.service. Sep 13 01:06:04.036358 kubelet[2087]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:06:04.036358 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 01:06:04.036358 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:06:04.036775 kubelet[2087]: I0913 01:06:04.036426 2087 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:06:04.046125 kubelet[2087]: I0913 01:06:04.046105 2087 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 01:06:04.046270 kubelet[2087]: I0913 01:06:04.046261 2087 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:06:04.046517 kubelet[2087]: I0913 01:06:04.046508 2087 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 01:06:04.051870 kubelet[2087]: I0913 01:06:04.051842 2087 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 01:06:04.055332 sudo[2099]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 01:06:04.055496 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 01:06:04.057156 kubelet[2087]: I0913 01:06:04.057138 2087 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:06:04.071554 kubelet[2087]: E0913 01:06:04.071497 2087 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:06:04.071684 kubelet[2087]: I0913 01:06:04.071675 2087 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:06:04.078469 kubelet[2087]: I0913 01:06:04.078439 2087 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 01:06:04.079420 kubelet[2087]: I0913 01:06:04.079394 2087 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:06:04.079588 kubelet[2087]: I0913 01:06:04.079477 2087 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:06:04.079693 kubelet[2087]: I0913 01:06:04.079684 2087 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 01:06:04.079740 kubelet[2087]: I0913 01:06:04.079733 2087 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 01:06:04.081067 kubelet[2087]: I0913 01:06:04.081047 2087 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:06:04.081252 kubelet[2087]: I0913 01:06:04.081245 2087 kubelet.go:446] "Attempting to sync node with API server" Sep 13 01:06:04.081386 kubelet[2087]: I0913 01:06:04.081378 2087 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:06:04.081454 kubelet[2087]: I0913 01:06:04.081444 2087 kubelet.go:352] "Adding apiserver pod source" Sep 13 01:06:04.081518 kubelet[2087]: I0913 01:06:04.081508 2087 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:06:04.089999 kubelet[2087]: I0913 01:06:04.089983 2087 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:06:04.091662 kubelet[2087]: I0913 01:06:04.091652 2087 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:06:04.100701 kubelet[2087]: I0913 01:06:04.100687 2087 apiserver.go:52] "Watching apiserver" Sep 13 01:06:04.102401 kubelet[2087]: I0913 01:06:04.102388 2087 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:06:04.102516 kubelet[2087]: I0913 01:06:04.102508 2087 server.go:1287] "Started kubelet" Sep 13 01:06:04.113504 kubelet[2087]: I0913 01:06:04.113486 2087 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:06:04.118855 kubelet[2087]: I0913 01:06:04.118829 2087 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:06:04.119531 kubelet[2087]: I0913 01:06:04.119521 2087 server.go:479] "Adding debug handlers to kubelet server" Sep 13 01:06:04.120091 kubelet[2087]: I0913 01:06:04.120066 2087 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:06:04.120230 
kubelet[2087]: I0913 01:06:04.120222 2087 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:06:04.120394 kubelet[2087]: I0913 01:06:04.120386 2087 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:06:04.120927 kubelet[2087]: I0913 01:06:04.120909 2087 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:06:04.126871 kubelet[2087]: I0913 01:06:04.126849 2087 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:06:04.127056 kubelet[2087]: I0913 01:06:04.127049 2087 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:06:04.129203 kubelet[2087]: I0913 01:06:04.129180 2087 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:06:04.129715 kubelet[2087]: I0913 01:06:04.129701 2087 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:06:04.132946 kubelet[2087]: E0913 01:06:04.132929 2087 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:06:04.136081 kubelet[2087]: I0913 01:06:04.136063 2087 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:06:04.145377 kubelet[2087]: I0913 01:06:04.145346 2087 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:06:04.146076 kubelet[2087]: I0913 01:06:04.146064 2087 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Sep 13 01:06:04.146165 kubelet[2087]: I0913 01:06:04.146157 2087 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 01:06:04.146225 kubelet[2087]: I0913 01:06:04.146217 2087 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 01:06:04.146272 kubelet[2087]: I0913 01:06:04.146265 2087 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 01:06:04.146353 kubelet[2087]: E0913 01:06:04.146340 2087 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 01:06:04.185237 kubelet[2087]: I0913 01:06:04.185222 2087 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 01:06:04.185350 kubelet[2087]: I0913 01:06:04.185340 2087 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 01:06:04.185420 kubelet[2087]: I0913 01:06:04.185413 2087 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:06:04.185614 kubelet[2087]: I0913 01:06:04.185606 2087 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 01:06:04.185682 kubelet[2087]: I0913 01:06:04.185665 2087 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 01:06:04.185737 kubelet[2087]: I0913 01:06:04.185729 2087 policy_none.go:49] "None policy: Start"
Sep 13 01:06:04.185812 kubelet[2087]: I0913 01:06:04.185804 2087 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 01:06:04.185867 kubelet[2087]: I0913 01:06:04.185860 2087 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 01:06:04.185998 kubelet[2087]: I0913 01:06:04.185990 2087 state_mem.go:75] "Updated machine memory state"
Sep 13 01:06:04.188283 kubelet[2087]: I0913 01:06:04.188272 2087 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 01:06:04.188688 kubelet[2087]: I0913 01:06:04.188681 2087 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 01:06:04.188759 kubelet[2087]: I0913 01:06:04.188740 2087 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 01:06:04.191264 kubelet[2087]: I0913 01:06:04.191245 2087 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 01:06:04.194783 kubelet[2087]: E0913 01:06:04.194274 2087 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 01:06:04.248058 kubelet[2087]: I0913 01:06:04.247145 2087 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 01:06:04.249645 kubelet[2087]: I0913 01:06:04.249634 2087 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 01:06:04.249816 kubelet[2087]: I0913 01:06:04.249809 2087 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 01:06:04.295986 kubelet[2087]: I0913 01:06:04.295962 2087 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 01:06:04.301680 kubelet[2087]: I0913 01:06:04.301658 2087 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 13 01:06:04.301843 kubelet[2087]: I0913 01:06:04.301834 2087 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 01:06:04.328007 kubelet[2087]: I0913 01:06:04.327981 2087 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 01:06:04.429507 kubelet[2087]: I0913 01:06:04.429481 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 01:06:04.429647 kubelet[2087]: I0913 01:06:04.429637 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 01:06:04.429709 kubelet[2087]: I0913 01:06:04.429699 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 01:06:04.429783 kubelet[2087]: I0913 01:06:04.429772 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0b381e9c248d782688c35a4fece3e93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b381e9c248d782688c35a4fece3e93\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 01:06:04.429873 kubelet[2087]: I0913 01:06:04.429846 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0b381e9c248d782688c35a4fece3e93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0b381e9c248d782688c35a4fece3e93\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 01:06:04.429942 kubelet[2087]: I0913 01:06:04.429933 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 01:06:04.429996 kubelet[2087]: I0913 01:06:04.429987 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 01:06:04.430060 kubelet[2087]: I0913 01:06:04.430050 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 01:06:04.430123 kubelet[2087]: I0913 01:06:04.430114 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0b381e9c248d782688c35a4fece3e93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b381e9c248d782688c35a4fece3e93\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 01:06:04.583885 sudo[2099]: pam_unix(sudo:session): session closed for user root
Sep 13 01:06:05.092737 kubelet[2087]: I0913 01:06:05.092688 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.092596967 podStartE2EDuration="1.092596967s" podCreationTimestamp="2025-09-13 01:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:05.088807982 +0000 UTC m=+1.102547401" watchObservedRunningTime="2025-09-13 01:06:05.092596967 +0000 UTC m=+1.106336381"
Sep 13 01:06:05.097234 kubelet[2087]: I0913 01:06:05.097199 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.097187575 podStartE2EDuration="1.097187575s" podCreationTimestamp="2025-09-13 01:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:05.092808231 +0000 UTC m=+1.106547639" watchObservedRunningTime="2025-09-13 01:06:05.097187575 +0000 UTC m=+1.110926987"
Sep 13 01:06:05.179728 kubelet[2087]: I0913 01:06:05.179695 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.17968138 podStartE2EDuration="1.17968138s" podCreationTimestamp="2025-09-13 01:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:05.097396415 +0000 UTC m=+1.111135833" watchObservedRunningTime="2025-09-13 01:06:05.17968138 +0000 UTC m=+1.193420791"
Sep 13 01:06:06.393283 sudo[1471]: pam_unix(sudo:session): session closed for user root
Sep 13 01:06:06.394852 sshd[1467]: pam_unix(sshd:session): session closed for user core
Sep 13 01:06:06.396598 systemd[1]: sshd@4-139.178.70.99:22-147.75.109.163:57358.service: Deactivated successfully.
Sep 13 01:06:06.397052 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 01:06:06.397155 systemd[1]: session-7.scope: Consumed 2.786s CPU time.
Sep 13 01:06:06.397451 systemd-logind[1262]: Session 7 logged out. Waiting for processes to exit.
Sep 13 01:06:06.398228 systemd-logind[1262]: Removed session 7.
Sep 13 01:06:07.872900 kubelet[2087]: I0913 01:06:07.872871 2087 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 01:06:07.873435 env[1291]: time="2025-09-13T01:06:07.873415004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 01:06:07.873794 kubelet[2087]: I0913 01:06:07.873776 2087 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 01:06:08.785917 systemd[1]: Created slice kubepods-besteffort-pod4a5d6730_6535_4226_80d6_1724721f52ba.slice.
Sep 13 01:06:08.801798 systemd[1]: Created slice kubepods-burstable-pod10042a06_0b7d_475f_837e_ffb345721f86.slice.
Sep 13 01:06:08.930480 systemd[1]: Created slice kubepods-besteffort-pod7017425f_5a64_43eb_b5cd_e4225dc6e636.slice.
Sep 13 01:06:08.960598 kubelet[2087]: I0913 01:06:08.960570 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-hostproc\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.960859 kubelet[2087]: I0913 01:06:08.960605 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-xtables-lock\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.960859 kubelet[2087]: I0913 01:06:08.960636 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-cgroup\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.960859 kubelet[2087]: I0913 01:06:08.960651 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-net\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.960859 kubelet[2087]: I0913 01:06:08.960660 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a5d6730-6535-4226-80d6-1724721f52ba-lib-modules\") pod \"kube-proxy-cvwf5\" (UID: \"4a5d6730-6535-4226-80d6-1724721f52ba\") " pod="kube-system/kube-proxy-cvwf5"
Sep 13 01:06:08.960859 kubelet[2087]: I0913 01:06:08.960672 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-run\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.960859 kubelet[2087]: I0913 01:06:08.960685 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-bpf-maps\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961007 kubelet[2087]: I0913 01:06:08.960709 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-lib-modules\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961007 kubelet[2087]: I0913 01:06:08.960726 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-hubble-tls\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961007 kubelet[2087]: I0913 01:06:08.960745 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654lp\" (UniqueName: \"kubernetes.io/projected/4a5d6730-6535-4226-80d6-1724721f52ba-kube-api-access-654lp\") pod \"kube-proxy-cvwf5\" (UID: \"4a5d6730-6535-4226-80d6-1724721f52ba\") " pod="kube-system/kube-proxy-cvwf5"
Sep 13 01:06:08.961007 kubelet[2087]: I0913 01:06:08.960761 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cni-path\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961007 kubelet[2087]: I0913 01:06:08.960787 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-etc-cni-netd\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961007 kubelet[2087]: I0913 01:06:08.960806 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10042a06-0b7d-475f-837e-ffb345721f86-clustermesh-secrets\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961225 kubelet[2087]: I0913 01:06:08.960818 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10042a06-0b7d-475f-837e-ffb345721f86-cilium-config-path\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961225 kubelet[2087]: I0913 01:06:08.960828 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a5d6730-6535-4226-80d6-1724721f52ba-xtables-lock\") pod \"kube-proxy-cvwf5\" (UID: \"4a5d6730-6535-4226-80d6-1724721f52ba\") " pod="kube-system/kube-proxy-cvwf5"
Sep 13 01:06:08.961225 kubelet[2087]: I0913 01:06:08.960843 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-kernel\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961225 kubelet[2087]: I0913 01:06:08.960872 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98bp\" (UniqueName: \"kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-kube-api-access-p98bp\") pod \"cilium-62dmr\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") " pod="kube-system/cilium-62dmr"
Sep 13 01:06:08.961225 kubelet[2087]: I0913 01:06:08.960885 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a5d6730-6535-4226-80d6-1724721f52ba-kube-proxy\") pod \"kube-proxy-cvwf5\" (UID: \"4a5d6730-6535-4226-80d6-1724721f52ba\") " pod="kube-system/kube-proxy-cvwf5"
Sep 13 01:06:09.061579 kubelet[2087]: I0913 01:06:09.061501 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7017425f-5a64-43eb-b5cd-e4225dc6e636-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xng8j\" (UID: \"7017425f-5a64-43eb-b5cd-e4225dc6e636\") " pod="kube-system/cilium-operator-6c4d7847fc-xng8j"
Sep 13 01:06:09.061807 kubelet[2087]: I0913 01:06:09.061788 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlvg\" (UniqueName: \"kubernetes.io/projected/7017425f-5a64-43eb-b5cd-e4225dc6e636-kube-api-access-cxlvg\") pod \"cilium-operator-6c4d7847fc-xng8j\" (UID: \"7017425f-5a64-43eb-b5cd-e4225dc6e636\") " pod="kube-system/cilium-operator-6c4d7847fc-xng8j"
Sep 13 01:06:09.062464 kubelet[2087]: I0913 01:06:09.062330 2087 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 01:06:09.234769 env[1291]: time="2025-09-13T01:06:09.234419191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xng8j,Uid:7017425f-5a64-43eb-b5cd-e4225dc6e636,Namespace:kube-system,Attempt:0,}"
Sep 13 01:06:09.271892 env[1291]: time="2025-09-13T01:06:09.271823774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:06:09.271892 env[1291]: time="2025-09-13T01:06:09.271863120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:06:09.272060 env[1291]: time="2025-09-13T01:06:09.272035431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:06:09.272443 env[1291]: time="2025-09-13T01:06:09.272386033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161 pid=2170 runtime=io.containerd.runc.v2
Sep 13 01:06:09.280427 systemd[1]: Started cri-containerd-48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161.scope.
Sep 13 01:06:09.320260 env[1291]: time="2025-09-13T01:06:09.319986214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xng8j,Uid:7017425f-5a64-43eb-b5cd-e4225dc6e636,Namespace:kube-system,Attempt:0,} returns sandbox id \"48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161\""
Sep 13 01:06:09.323031 env[1291]: time="2025-09-13T01:06:09.322992977Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 01:06:09.394861 env[1291]: time="2025-09-13T01:06:09.394820130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvwf5,Uid:4a5d6730-6535-4226-80d6-1724721f52ba,Namespace:kube-system,Attempt:0,}"
Sep 13 01:06:09.406828 env[1291]: time="2025-09-13T01:06:09.406797605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-62dmr,Uid:10042a06-0b7d-475f-837e-ffb345721f86,Namespace:kube-system,Attempt:0,}"
Sep 13 01:06:09.554288 env[1291]: time="2025-09-13T01:06:09.554244060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:06:09.554421 env[1291]: time="2025-09-13T01:06:09.554271481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:06:09.554421 env[1291]: time="2025-09-13T01:06:09.554284632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:06:09.554516 env[1291]: time="2025-09-13T01:06:09.554445425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a43e61e27496c0f8e0df890d4b34580369e8f126536a06ad830db4c95c38a45 pid=2211 runtime=io.containerd.runc.v2
Sep 13 01:06:09.566551 systemd[1]: Started cri-containerd-0a43e61e27496c0f8e0df890d4b34580369e8f126536a06ad830db4c95c38a45.scope.
Sep 13 01:06:09.579534 env[1291]: time="2025-09-13T01:06:09.579265362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:06:09.579652 env[1291]: time="2025-09-13T01:06:09.579313390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:06:09.579652 env[1291]: time="2025-09-13T01:06:09.579325000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:06:09.579742 env[1291]: time="2025-09-13T01:06:09.579711144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053 pid=2244 runtime=io.containerd.runc.v2
Sep 13 01:06:09.590347 systemd[1]: Started cri-containerd-7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053.scope.
Sep 13 01:06:09.597097 env[1291]: time="2025-09-13T01:06:09.597067662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvwf5,Uid:4a5d6730-6535-4226-80d6-1724721f52ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a43e61e27496c0f8e0df890d4b34580369e8f126536a06ad830db4c95c38a45\""
Sep 13 01:06:09.599723 env[1291]: time="2025-09-13T01:06:09.599692202Z" level=info msg="CreateContainer within sandbox \"0a43e61e27496c0f8e0df890d4b34580369e8f126536a06ad830db4c95c38a45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 01:06:09.613779 env[1291]: time="2025-09-13T01:06:09.613752679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-62dmr,Uid:10042a06-0b7d-475f-837e-ffb345721f86,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\""
Sep 13 01:06:09.670318 env[1291]: time="2025-09-13T01:06:09.670287285Z" level=info msg="CreateContainer within sandbox \"0a43e61e27496c0f8e0df890d4b34580369e8f126536a06ad830db4c95c38a45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd73d5b57cf9981f6b3e47ba15d7d9d11bc35f3aa2352cb18a57bb74641b2d86\""
Sep 13 01:06:09.672092 env[1291]: time="2025-09-13T01:06:09.672061401Z" level=info msg="StartContainer for \"dd73d5b57cf9981f6b3e47ba15d7d9d11bc35f3aa2352cb18a57bb74641b2d86\""
Sep 13 01:06:09.689294 systemd[1]: Started cri-containerd-dd73d5b57cf9981f6b3e47ba15d7d9d11bc35f3aa2352cb18a57bb74641b2d86.scope.
Sep 13 01:06:09.716024 env[1291]: time="2025-09-13T01:06:09.715982754Z" level=info msg="StartContainer for \"dd73d5b57cf9981f6b3e47ba15d7d9d11bc35f3aa2352cb18a57bb74641b2d86\" returns successfully"
Sep 13 01:06:10.775204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145912480.mount: Deactivated successfully.
Sep 13 01:06:11.275390 env[1291]: time="2025-09-13T01:06:11.275327836Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:06:11.280078 env[1291]: time="2025-09-13T01:06:11.280043166Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:06:11.282066 env[1291]: time="2025-09-13T01:06:11.282036818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:06:11.282589 env[1291]: time="2025-09-13T01:06:11.282528495Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 01:06:11.284701 env[1291]: time="2025-09-13T01:06:11.284637534Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 01:06:11.286475 env[1291]: time="2025-09-13T01:06:11.285087961Z" level=info msg="CreateContainer within sandbox \"48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 01:06:11.305288 env[1291]: time="2025-09-13T01:06:11.305250055Z" level=info msg="CreateContainer within sandbox \"48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\""
Sep 13 01:06:11.305890 env[1291]: time="2025-09-13T01:06:11.305869175Z" level=info msg="StartContainer for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\""
Sep 13 01:06:11.320499 systemd[1]: Started cri-containerd-feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d.scope.
Sep 13 01:06:11.358684 env[1291]: time="2025-09-13T01:06:11.358652493Z" level=info msg="StartContainer for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" returns successfully"
Sep 13 01:06:11.517925 kubelet[2087]: I0913 01:06:11.517877 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvwf5" podStartSLOduration=3.517865009 podStartE2EDuration="3.517865009s" podCreationTimestamp="2025-09-13 01:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:10.190064475 +0000 UTC m=+6.203803901" watchObservedRunningTime="2025-09-13 01:06:11.517865009 +0000 UTC m=+7.531604423"
Sep 13 01:06:12.297854 systemd[1]: run-containerd-runc-k8s.io-feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d-runc.XTxtVr.mount: Deactivated successfully.
Sep 13 01:06:14.191836 kubelet[2087]: I0913 01:06:14.191803 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xng8j" podStartSLOduration=4.229351505 podStartE2EDuration="6.191790207s" podCreationTimestamp="2025-09-13 01:06:08 +0000 UTC" firstStartedPulling="2025-09-13 01:06:09.321672525 +0000 UTC m=+5.335411940" lastFinishedPulling="2025-09-13 01:06:11.284111233 +0000 UTC m=+7.297850642" observedRunningTime="2025-09-13 01:06:12.207154197 +0000 UTC m=+8.220893617" watchObservedRunningTime="2025-09-13 01:06:14.191790207 +0000 UTC m=+10.205529621"
Sep 13 01:06:15.910931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655936642.mount: Deactivated successfully.
Sep 13 01:06:20.090873 env[1291]: time="2025-09-13T01:06:20.090837423Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:06:20.094259 env[1291]: time="2025-09-13T01:06:20.094228257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:06:20.096222 env[1291]: time="2025-09-13T01:06:20.096196516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:06:20.096786 env[1291]: time="2025-09-13T01:06:20.096766850Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 01:06:20.121631 env[1291]: time="2025-09-13T01:06:20.121599359Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:06:20.144338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364327118.mount: Deactivated successfully.
Sep 13 01:06:20.153403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount802764355.mount: Deactivated successfully.
Sep 13 01:06:20.166098 env[1291]: time="2025-09-13T01:06:20.166076856Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\""
Sep 13 01:06:20.166797 env[1291]: time="2025-09-13T01:06:20.166776345Z" level=info msg="StartContainer for \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\""
Sep 13 01:06:20.188235 systemd[1]: Started cri-containerd-4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d.scope.
Sep 13 01:06:20.212780 env[1291]: time="2025-09-13T01:06:20.212131780Z" level=info msg="StartContainer for \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\" returns successfully"
Sep 13 01:06:20.222619 systemd[1]: cri-containerd-4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d.scope: Deactivated successfully.
Sep 13 01:06:20.772825 env[1291]: time="2025-09-13T01:06:20.772794513Z" level=info msg="shim disconnected" id=4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d
Sep 13 01:06:20.772825 env[1291]: time="2025-09-13T01:06:20.772822448Z" level=warning msg="cleaning up after shim disconnected" id=4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d namespace=k8s.io
Sep 13 01:06:20.772825 env[1291]: time="2025-09-13T01:06:20.772829112Z" level=info msg="cleaning up dead shim"
Sep 13 01:06:20.778137 env[1291]: time="2025-09-13T01:06:20.778115037Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:06:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2538 runtime=io.containerd.runc.v2\n"
Sep 13 01:06:21.139518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d-rootfs.mount: Deactivated successfully.
Sep 13 01:06:21.444444 env[1291]: time="2025-09-13T01:06:21.444289691Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 01:06:21.478244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645183943.mount: Deactivated successfully.
Sep 13 01:06:21.502419 env[1291]: time="2025-09-13T01:06:21.502371787Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\""
Sep 13 01:06:21.503004 env[1291]: time="2025-09-13T01:06:21.502983771Z" level=info msg="StartContainer for \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\""
Sep 13 01:06:21.517518 systemd[1]: Started cri-containerd-08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3.scope.
Sep 13 01:06:21.547626 env[1291]: time="2025-09-13T01:06:21.547599605Z" level=info msg="StartContainer for \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\" returns successfully"
Sep 13 01:06:21.577927 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 01:06:21.578069 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 01:06:21.578245 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 01:06:21.579721 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:06:21.585086 systemd[1]: cri-containerd-08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3.scope: Deactivated successfully.
Sep 13 01:06:21.647829 env[1291]: time="2025-09-13T01:06:21.647794338Z" level=info msg="shim disconnected" id=08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3
Sep 13 01:06:21.648124 env[1291]: time="2025-09-13T01:06:21.648111692Z" level=warning msg="cleaning up after shim disconnected" id=08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3 namespace=k8s.io
Sep 13 01:06:21.648187 env[1291]: time="2025-09-13T01:06:21.648177591Z" level=info msg="cleaning up dead shim"
Sep 13 01:06:21.653673 env[1291]: time="2025-09-13T01:06:21.653616283Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:06:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2602 runtime=io.containerd.runc.v2\n"
Sep 13 01:06:21.717157 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:06:22.139848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3-rootfs.mount: Deactivated successfully.
Sep 13 01:06:22.445708 env[1291]: time="2025-09-13T01:06:22.445625939Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:06:22.457002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665591490.mount: Deactivated successfully.
Sep 13 01:06:22.460294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160336568.mount: Deactivated successfully.
Sep 13 01:06:22.470464 env[1291]: time="2025-09-13T01:06:22.470428440Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\""
Sep 13 01:06:22.471916 env[1291]: time="2025-09-13T01:06:22.471890887Z" level=info msg="StartContainer for \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\""
Sep 13 01:06:22.487999 systemd[1]: Started cri-containerd-7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b.scope.
Sep 13 01:06:22.519513 env[1291]: time="2025-09-13T01:06:22.519479509Z" level=info msg="StartContainer for \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\" returns successfully"
Sep 13 01:06:22.563021 systemd[1]: cri-containerd-7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b.scope: Deactivated successfully.
Sep 13 01:06:22.585735 env[1291]: time="2025-09-13T01:06:22.585700983Z" level=info msg="shim disconnected" id=7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b Sep 13 01:06:22.585735 env[1291]: time="2025-09-13T01:06:22.585731083Z" level=warning msg="cleaning up after shim disconnected" id=7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b namespace=k8s.io Sep 13 01:06:22.585735 env[1291]: time="2025-09-13T01:06:22.585737024Z" level=info msg="cleaning up dead shim" Sep 13 01:06:22.591944 env[1291]: time="2025-09-13T01:06:22.591910774Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:06:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2660 runtime=io.containerd.runc.v2\n" Sep 13 01:06:23.447627 env[1291]: time="2025-09-13T01:06:23.447597261Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:06:23.479808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320491934.mount: Deactivated successfully. Sep 13 01:06:23.499409 env[1291]: time="2025-09-13T01:06:23.499362906Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\"" Sep 13 01:06:23.500580 env[1291]: time="2025-09-13T01:06:23.500557901Z" level=info msg="StartContainer for \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\"" Sep 13 01:06:23.514638 systemd[1]: Started cri-containerd-6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a.scope. 
Sep 13 01:06:23.534304 env[1291]: time="2025-09-13T01:06:23.534276080Z" level=info msg="StartContainer for \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\" returns successfully" Sep 13 01:06:23.534948 systemd[1]: cri-containerd-6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a.scope: Deactivated successfully. Sep 13 01:06:23.565901 env[1291]: time="2025-09-13T01:06:23.565872780Z" level=info msg="shim disconnected" id=6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a Sep 13 01:06:23.566090 env[1291]: time="2025-09-13T01:06:23.566078783Z" level=warning msg="cleaning up after shim disconnected" id=6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a namespace=k8s.io Sep 13 01:06:23.566154 env[1291]: time="2025-09-13T01:06:23.566144157Z" level=info msg="cleaning up dead shim" Sep 13 01:06:23.571081 env[1291]: time="2025-09-13T01:06:23.571055737Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:06:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2717 runtime=io.containerd.runc.v2\n" Sep 13 01:06:24.139575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a-rootfs.mount: Deactivated successfully. Sep 13 01:06:24.452792 env[1291]: time="2025-09-13T01:06:24.451662915Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:06:24.461468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167147571.mount: Deactivated successfully. Sep 13 01:06:24.465690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806218534.mount: Deactivated successfully. 
Sep 13 01:06:24.468305 env[1291]: time="2025-09-13T01:06:24.468275514Z" level=info msg="CreateContainer within sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\"" Sep 13 01:06:24.468966 env[1291]: time="2025-09-13T01:06:24.468939054Z" level=info msg="StartContainer for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\"" Sep 13 01:06:24.482822 systemd[1]: Started cri-containerd-2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816.scope. Sep 13 01:06:24.511201 env[1291]: time="2025-09-13T01:06:24.511175675Z" level=info msg="StartContainer for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" returns successfully" Sep 13 01:06:24.831064 kubelet[2087]: I0913 01:06:24.831042 2087 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 01:06:24.920939 systemd[1]: Created slice kubepods-burstable-pod2d7d493d_0ec1_4424_812a_0c0eea185893.slice. Sep 13 01:06:24.924191 systemd[1]: Created slice kubepods-burstable-pod81311861_72c1_4eaa_a7dc_f23e707ce5ba.slice. 
Sep 13 01:06:24.992011 kubelet[2087]: I0913 01:06:24.991984 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgscq\" (UniqueName: \"kubernetes.io/projected/81311861-72c1-4eaa-a7dc-f23e707ce5ba-kube-api-access-qgscq\") pod \"coredns-668d6bf9bc-mglkl\" (UID: \"81311861-72c1-4eaa-a7dc-f23e707ce5ba\") " pod="kube-system/coredns-668d6bf9bc-mglkl" Sep 13 01:06:24.992191 kubelet[2087]: I0913 01:06:24.992176 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81311861-72c1-4eaa-a7dc-f23e707ce5ba-config-volume\") pod \"coredns-668d6bf9bc-mglkl\" (UID: \"81311861-72c1-4eaa-a7dc-f23e707ce5ba\") " pod="kube-system/coredns-668d6bf9bc-mglkl" Sep 13 01:06:24.992270 kubelet[2087]: I0913 01:06:24.992260 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d7d493d-0ec1-4424-812a-0c0eea185893-config-volume\") pod \"coredns-668d6bf9bc-hw2dj\" (UID: \"2d7d493d-0ec1-4424-812a-0c0eea185893\") " pod="kube-system/coredns-668d6bf9bc-hw2dj" Sep 13 01:06:24.992340 kubelet[2087]: I0913 01:06:24.992331 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tn5d\" (UniqueName: \"kubernetes.io/projected/2d7d493d-0ec1-4424-812a-0c0eea185893-kube-api-access-5tn5d\") pod \"coredns-668d6bf9bc-hw2dj\" (UID: \"2d7d493d-0ec1-4424-812a-0c0eea185893\") " pod="kube-system/coredns-668d6bf9bc-hw2dj" Sep 13 01:06:25.225820 env[1291]: time="2025-09-13T01:06:25.225738440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hw2dj,Uid:2d7d493d-0ec1-4424-812a-0c0eea185893,Namespace:kube-system,Attempt:0,}" Sep 13 01:06:25.226533 env[1291]: time="2025-09-13T01:06:25.226416036Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-mglkl,Uid:81311861-72c1-4eaa-a7dc-f23e707ce5ba,Namespace:kube-system,Attempt:0,}" Sep 13 01:06:25.269386 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Sep 13 01:06:25.481902 kubelet[2087]: I0913 01:06:25.474142 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-62dmr" podStartSLOduration=6.98290149 podStartE2EDuration="17.467744914s" podCreationTimestamp="2025-09-13 01:06:08 +0000 UTC" firstStartedPulling="2025-09-13 01:06:09.614700714 +0000 UTC m=+5.628440126" lastFinishedPulling="2025-09-13 01:06:20.099544143 +0000 UTC m=+16.113283550" observedRunningTime="2025-09-13 01:06:25.464116172 +0000 UTC m=+21.477855591" watchObservedRunningTime="2025-09-13 01:06:25.467744914 +0000 UTC m=+21.481484333" Sep 13 01:06:25.630384 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 01:06:28.077588 systemd-networkd[1081]: cilium_host: Link UP Sep 13 01:06:28.080472 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 01:06:28.080507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 01:06:28.078041 systemd-networkd[1081]: cilium_net: Link UP Sep 13 01:06:28.079119 systemd-networkd[1081]: cilium_net: Gained carrier Sep 13 01:06:28.081894 systemd-networkd[1081]: cilium_host: Gained carrier Sep 13 01:06:28.213871 systemd-networkd[1081]: cilium_vxlan: Link UP Sep 13 01:06:28.213875 systemd-networkd[1081]: cilium_vxlan: Gained carrier Sep 13 01:06:28.429489 systemd-networkd[1081]: cilium_host: Gained IPv6LL Sep 13 01:06:28.591384 kernel: NET: Registered PF_ALG protocol family Sep 13 01:06:28.821528 systemd-networkd[1081]: cilium_net: Gained IPv6LL Sep 13 01:06:29.100710 systemd-networkd[1081]: lxc_health: Link UP Sep 13 01:06:29.104704 systemd-networkd[1081]: lxc_health: Gained carrier Sep 13 01:06:29.107810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 01:06:29.337123 systemd-networkd[1081]: lxc48ca5a51d758: Link UP Sep 13 01:06:29.342639 systemd-networkd[1081]: lxc216e4ee46108: Link UP Sep 13 01:06:29.348406 kernel: eth0: renamed from tmp2dabf Sep 13 01:06:29.353773 systemd-networkd[1081]: lxc48ca5a51d758: Gained carrier Sep 13 01:06:29.354447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc48ca5a51d758: link becomes ready Sep 13 01:06:29.355272 kernel: eth0: renamed from tmp2ef93 Sep 13 01:06:29.360145 systemd-networkd[1081]: lxc216e4ee46108: Gained carrier Sep 13 01:06:29.362463 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc216e4ee46108: link becomes ready Sep 13 01:06:30.037445 systemd-networkd[1081]: cilium_vxlan: Gained IPv6LL Sep 13 01:06:30.549471 systemd-networkd[1081]: lxc216e4ee46108: Gained IPv6LL Sep 13 01:06:30.549660 systemd-networkd[1081]: lxc_health: Gained IPv6LL Sep 13 01:06:30.997524 systemd-networkd[1081]: lxc48ca5a51d758: Gained IPv6LL Sep 13 
01:06:32.009467 env[1291]: time="2025-09-13T01:06:32.006916963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:06:32.009467 env[1291]: time="2025-09-13T01:06:32.006940651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:06:32.009467 env[1291]: time="2025-09-13T01:06:32.006947554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:06:32.009467 env[1291]: time="2025-09-13T01:06:32.007009713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ef9321f34ab33f7a5cce20100eef36956cd716273e9f98aae00e7933c8f67ac pid=3275 runtime=io.containerd.runc.v2 Sep 13 01:06:32.022716 env[1291]: time="2025-09-13T01:06:32.020235692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:06:32.022716 env[1291]: time="2025-09-13T01:06:32.020265990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:06:32.022716 env[1291]: time="2025-09-13T01:06:32.020273217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:06:32.022716 env[1291]: time="2025-09-13T01:06:32.020405619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2dabfc68b5725328cc2d33be524a0a3ca2cf69a15d6f2daced4b6845cbea016a pid=3291 runtime=io.containerd.runc.v2 Sep 13 01:06:32.082750 systemd[1]: run-containerd-runc-k8s.io-2dabfc68b5725328cc2d33be524a0a3ca2cf69a15d6f2daced4b6845cbea016a-runc.bF6QdO.mount: Deactivated successfully. 
Sep 13 01:06:32.084199 systemd[1]: Started cri-containerd-2dabfc68b5725328cc2d33be524a0a3ca2cf69a15d6f2daced4b6845cbea016a.scope. Sep 13 01:06:32.092146 systemd[1]: Started cri-containerd-2ef9321f34ab33f7a5cce20100eef36956cd716273e9f98aae00e7933c8f67ac.scope. Sep 13 01:06:32.104049 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 01:06:32.139642 env[1291]: time="2025-09-13T01:06:32.128688667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mglkl,Uid:81311861-72c1-4eaa-a7dc-f23e707ce5ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dabfc68b5725328cc2d33be524a0a3ca2cf69a15d6f2daced4b6845cbea016a\"" Sep 13 01:06:32.139642 env[1291]: time="2025-09-13T01:06:32.131055619Z" level=info msg="CreateContainer within sandbox \"2dabfc68b5725328cc2d33be524a0a3ca2cf69a15d6f2daced4b6845cbea016a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:06:32.139642 env[1291]: time="2025-09-13T01:06:32.131201154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hw2dj,Uid:2d7d493d-0ec1-4424-812a-0c0eea185893,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ef9321f34ab33f7a5cce20100eef36956cd716273e9f98aae00e7933c8f67ac\"" Sep 13 01:06:32.139642 env[1291]: time="2025-09-13T01:06:32.132410489Z" level=info msg="CreateContainer within sandbox \"2ef9321f34ab33f7a5cce20100eef36956cd716273e9f98aae00e7933c8f67ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:06:32.108259 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 01:06:32.452104 env[1291]: time="2025-09-13T01:06:32.452070521Z" level=info msg="CreateContainer within sandbox \"2ef9321f34ab33f7a5cce20100eef36956cd716273e9f98aae00e7933c8f67ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"54368ee2d85474aa4e7d66a58cc0ef81f6132e8aa66b42766d5c8300c6a463fe\"" Sep 13 01:06:32.453490 env[1291]: time="2025-09-13T01:06:32.453126055Z" level=info msg="StartContainer for \"54368ee2d85474aa4e7d66a58cc0ef81f6132e8aa66b42766d5c8300c6a463fe\"" Sep 13 01:06:32.453615 env[1291]: time="2025-09-13T01:06:32.453598979Z" level=info msg="CreateContainer within sandbox \"2dabfc68b5725328cc2d33be524a0a3ca2cf69a15d6f2daced4b6845cbea016a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09e7e02b6341165a2505dc8ba81770c3f91daa3bea0b442b7bab0b8f1bf66f8e\"" Sep 13 01:06:32.454005 env[1291]: time="2025-09-13T01:06:32.453991445Z" level=info msg="StartContainer for \"09e7e02b6341165a2505dc8ba81770c3f91daa3bea0b442b7bab0b8f1bf66f8e\"" Sep 13 01:06:32.471454 systemd[1]: Started cri-containerd-09e7e02b6341165a2505dc8ba81770c3f91daa3bea0b442b7bab0b8f1bf66f8e.scope. Sep 13 01:06:32.476652 systemd[1]: Started cri-containerd-54368ee2d85474aa4e7d66a58cc0ef81f6132e8aa66b42766d5c8300c6a463fe.scope. Sep 13 01:06:32.507765 env[1291]: time="2025-09-13T01:06:32.507734257Z" level=info msg="StartContainer for \"09e7e02b6341165a2505dc8ba81770c3f91daa3bea0b442b7bab0b8f1bf66f8e\" returns successfully" Sep 13 01:06:32.508542 env[1291]: time="2025-09-13T01:06:32.508411798Z" level=info msg="StartContainer for \"54368ee2d85474aa4e7d66a58cc0ef81f6132e8aa66b42766d5c8300c6a463fe\" returns successfully" Sep 13 01:06:33.012050 systemd[1]: run-containerd-runc-k8s.io-2ef9321f34ab33f7a5cce20100eef36956cd716273e9f98aae00e7933c8f67ac-runc.Ds0eYY.mount: Deactivated successfully. 
Sep 13 01:06:33.492665 kubelet[2087]: I0913 01:06:33.492628 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hw2dj" podStartSLOduration=25.492610276 podStartE2EDuration="25.492610276s" podCreationTimestamp="2025-09-13 01:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:33.492221292 +0000 UTC m=+29.505960710" watchObservedRunningTime="2025-09-13 01:06:33.492610276 +0000 UTC m=+29.506349687" Sep 13 01:06:33.526212 kubelet[2087]: I0913 01:06:33.526176 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mglkl" podStartSLOduration=25.526162438 podStartE2EDuration="25.526162438s" podCreationTimestamp="2025-09-13 01:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:06:33.516225017 +0000 UTC m=+29.529964449" watchObservedRunningTime="2025-09-13 01:06:33.526162438 +0000 UTC m=+29.539901850" Sep 13 01:07:15.276345 systemd[1]: Started sshd@5-139.178.70.99:22-147.75.109.163:49994.service. Sep 13 01:07:15.349730 sshd[3442]: Accepted publickey for core from 147.75.109.163 port 49994 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:15.350499 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:15.353773 systemd-logind[1262]: New session 8 of user core. Sep 13 01:07:15.354578 systemd[1]: Started session-8.scope. Sep 13 01:07:15.719797 sshd[3442]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:15.722255 systemd-logind[1262]: Session 8 logged out. Waiting for processes to exit. Sep 13 01:07:15.722383 systemd[1]: sshd@5-139.178.70.99:22-147.75.109.163:49994.service: Deactivated successfully. Sep 13 01:07:15.722888 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 13 01:07:15.723386 systemd-logind[1262]: Removed session 8. Sep 13 01:07:20.723013 systemd[1]: Started sshd@6-139.178.70.99:22-147.75.109.163:33800.service. Sep 13 01:07:20.853992 sshd[3454]: Accepted publickey for core from 147.75.109.163 port 33800 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:20.855222 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:20.860022 systemd[1]: Started session-9.scope. Sep 13 01:07:20.860396 systemd-logind[1262]: New session 9 of user core. Sep 13 01:07:21.243974 sshd[3454]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:21.245649 systemd[1]: sshd@6-139.178.70.99:22-147.75.109.163:33800.service: Deactivated successfully. Sep 13 01:07:21.246074 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 01:07:21.246321 systemd-logind[1262]: Session 9 logged out. Waiting for processes to exit. Sep 13 01:07:21.246746 systemd-logind[1262]: Removed session 9. Sep 13 01:07:26.248018 systemd[1]: Started sshd@7-139.178.70.99:22-147.75.109.163:33808.service. Sep 13 01:07:26.287134 sshd[3468]: Accepted publickey for core from 147.75.109.163 port 33808 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:26.288356 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:26.291509 systemd[1]: Started session-10.scope. Sep 13 01:07:26.291818 systemd-logind[1262]: New session 10 of user core. Sep 13 01:07:26.388620 sshd[3468]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:26.390614 systemd[1]: sshd@7-139.178.70.99:22-147.75.109.163:33808.service: Deactivated successfully. Sep 13 01:07:26.391079 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 01:07:26.391693 systemd-logind[1262]: Session 10 logged out. Waiting for processes to exit. Sep 13 01:07:26.392149 systemd-logind[1262]: Removed session 10. 
Sep 13 01:07:31.392004 systemd[1]: Started sshd@8-139.178.70.99:22-147.75.109.163:37588.service. Sep 13 01:07:31.428287 sshd[3481]: Accepted publickey for core from 147.75.109.163 port 37588 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:31.429162 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:31.432267 systemd[1]: Started session-11.scope. Sep 13 01:07:31.433071 systemd-logind[1262]: New session 11 of user core. Sep 13 01:07:31.532760 sshd[3481]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:31.535232 systemd[1]: Started sshd@9-139.178.70.99:22-147.75.109.163:37590.service. Sep 13 01:07:31.538950 systemd-logind[1262]: Session 11 logged out. Waiting for processes to exit. Sep 13 01:07:31.539874 systemd[1]: sshd@8-139.178.70.99:22-147.75.109.163:37588.service: Deactivated successfully. Sep 13 01:07:31.540337 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 01:07:31.541203 systemd-logind[1262]: Removed session 11. Sep 13 01:07:31.571002 sshd[3493]: Accepted publickey for core from 147.75.109.163 port 37590 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:31.571907 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:31.574968 systemd[1]: Started session-12.scope. Sep 13 01:07:31.575754 systemd-logind[1262]: New session 12 of user core. Sep 13 01:07:31.723139 sshd[3493]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:31.725754 systemd[1]: Started sshd@10-139.178.70.99:22-147.75.109.163:37606.service. Sep 13 01:07:31.731787 systemd-logind[1262]: Session 12 logged out. Waiting for processes to exit. Sep 13 01:07:31.732681 systemd[1]: sshd@9-139.178.70.99:22-147.75.109.163:37590.service: Deactivated successfully. Sep 13 01:07:31.733189 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 01:07:31.734040 systemd-logind[1262]: Removed session 12. 
Sep 13 01:07:31.779229 sshd[3503]: Accepted publickey for core from 147.75.109.163 port 37606 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:31.780315 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:31.783498 systemd[1]: Started session-13.scope. Sep 13 01:07:31.783700 systemd-logind[1262]: New session 13 of user core. Sep 13 01:07:31.895769 sshd[3503]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:31.897418 systemd[1]: sshd@10-139.178.70.99:22-147.75.109.163:37606.service: Deactivated successfully. Sep 13 01:07:31.897821 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 01:07:31.898054 systemd-logind[1262]: Session 13 logged out. Waiting for processes to exit. Sep 13 01:07:31.898517 systemd-logind[1262]: Removed session 13. Sep 13 01:07:36.899131 systemd[1]: Started sshd@11-139.178.70.99:22-147.75.109.163:37608.service. Sep 13 01:07:36.933558 sshd[3516]: Accepted publickey for core from 147.75.109.163 port 37608 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:36.934397 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:36.937393 systemd[1]: Started session-14.scope. Sep 13 01:07:36.937682 systemd-logind[1262]: New session 14 of user core. Sep 13 01:07:37.033827 sshd[3516]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:37.035482 systemd[1]: sshd@11-139.178.70.99:22-147.75.109.163:37608.service: Deactivated successfully. Sep 13 01:07:37.035938 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 01:07:37.036526 systemd-logind[1262]: Session 14 logged out. Waiting for processes to exit. Sep 13 01:07:37.037103 systemd-logind[1262]: Removed session 14. Sep 13 01:07:42.038096 systemd[1]: Started sshd@12-139.178.70.99:22-147.75.109.163:38304.service. 
Sep 13 01:07:42.074698 sshd[3531]: Accepted publickey for core from 147.75.109.163 port 38304 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:42.076001 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:42.079875 systemd[1]: Started session-15.scope. Sep 13 01:07:42.080528 systemd-logind[1262]: New session 15 of user core. Sep 13 01:07:42.176667 sshd[3531]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:42.179160 systemd[1]: Started sshd@13-139.178.70.99:22-147.75.109.163:38314.service. Sep 13 01:07:42.184402 systemd[1]: sshd@12-139.178.70.99:22-147.75.109.163:38304.service: Deactivated successfully. Sep 13 01:07:42.184792 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 01:07:42.185241 systemd-logind[1262]: Session 15 logged out. Waiting for processes to exit. Sep 13 01:07:42.185688 systemd-logind[1262]: Removed session 15. Sep 13 01:07:42.215389 sshd[3542]: Accepted publickey for core from 147.75.109.163 port 38314 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:42.216334 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:42.220257 systemd[1]: Started session-16.scope. Sep 13 01:07:42.220625 systemd-logind[1262]: New session 16 of user core. Sep 13 01:07:42.684183 sshd[3542]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:42.687750 systemd[1]: Started sshd@14-139.178.70.99:22-147.75.109.163:38330.service. Sep 13 01:07:42.691362 systemd-logind[1262]: Session 16 logged out. Waiting for processes to exit. Sep 13 01:07:42.692800 systemd[1]: sshd@13-139.178.70.99:22-147.75.109.163:38314.service: Deactivated successfully. Sep 13 01:07:42.693318 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 01:07:42.694153 systemd-logind[1262]: Removed session 16. 
Sep 13 01:07:42.740526 sshd[3552]: Accepted publickey for core from 147.75.109.163 port 38330 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:42.741606 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:42.744179 systemd-logind[1262]: New session 17 of user core. Sep 13 01:07:42.744769 systemd[1]: Started session-17.scope. Sep 13 01:07:43.326060 sshd[3552]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:43.329175 systemd[1]: Started sshd@15-139.178.70.99:22-147.75.109.163:38342.service. Sep 13 01:07:43.334325 systemd[1]: sshd@14-139.178.70.99:22-147.75.109.163:38330.service: Deactivated successfully. Sep 13 01:07:43.334758 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 01:07:43.335085 systemd-logind[1262]: Session 17 logged out. Waiting for processes to exit. Sep 13 01:07:43.335575 systemd-logind[1262]: Removed session 17. Sep 13 01:07:43.370905 sshd[3568]: Accepted publickey for core from 147.75.109.163 port 38342 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:43.371988 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:43.374491 systemd-logind[1262]: New session 18 of user core. Sep 13 01:07:43.374993 systemd[1]: Started session-18.scope. Sep 13 01:07:43.571242 sshd[3568]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:43.573939 systemd[1]: Started sshd@16-139.178.70.99:22-147.75.109.163:38358.service. Sep 13 01:07:43.575126 systemd[1]: sshd@15-139.178.70.99:22-147.75.109.163:38342.service: Deactivated successfully. Sep 13 01:07:43.575653 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 01:07:43.576347 systemd-logind[1262]: Session 18 logged out. Waiting for processes to exit. Sep 13 01:07:43.576960 systemd-logind[1262]: Removed session 18. 
Sep 13 01:07:43.616914 sshd[3579]: Accepted publickey for core from 147.75.109.163 port 38358 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:43.617744 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:43.620170 systemd-logind[1262]: New session 19 of user core. Sep 13 01:07:43.620664 systemd[1]: Started session-19.scope. Sep 13 01:07:43.715737 sshd[3579]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:43.717651 systemd-logind[1262]: Session 19 logged out. Waiting for processes to exit. Sep 13 01:07:43.717837 systemd[1]: sshd@16-139.178.70.99:22-147.75.109.163:38358.service: Deactivated successfully. Sep 13 01:07:43.718470 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 01:07:43.719185 systemd-logind[1262]: Removed session 19. Sep 13 01:07:48.720984 systemd[1]: Started sshd@17-139.178.70.99:22-147.75.109.163:38360.service. Sep 13 01:07:48.757216 sshd[3593]: Accepted publickey for core from 147.75.109.163 port 38360 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:48.758688 sshd[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:48.762129 systemd-logind[1262]: New session 20 of user core. Sep 13 01:07:48.762881 systemd[1]: Started session-20.scope. Sep 13 01:07:48.854631 sshd[3593]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:48.856739 systemd[1]: sshd@17-139.178.70.99:22-147.75.109.163:38360.service: Deactivated successfully. Sep 13 01:07:48.857216 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 01:07:48.857748 systemd-logind[1262]: Session 20 logged out. Waiting for processes to exit. Sep 13 01:07:48.858608 systemd-logind[1262]: Removed session 20. Sep 13 01:07:53.859301 systemd[1]: Started sshd@18-139.178.70.99:22-147.75.109.163:35378.service. 
Sep 13 01:07:53.895025 sshd[3605]: Accepted publickey for core from 147.75.109.163 port 35378 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:53.896537 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:53.900666 systemd[1]: Started session-21.scope. Sep 13 01:07:53.901440 systemd-logind[1262]: New session 21 of user core. Sep 13 01:07:53.990269 sshd[3605]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:53.991839 systemd[1]: sshd@18-139.178.70.99:22-147.75.109.163:35378.service: Deactivated successfully. Sep 13 01:07:53.992277 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 01:07:53.992930 systemd-logind[1262]: Session 21 logged out. Waiting for processes to exit. Sep 13 01:07:53.993466 systemd-logind[1262]: Removed session 21. Sep 13 01:07:58.993982 systemd[1]: Started sshd@19-139.178.70.99:22-147.75.109.163:35388.service. Sep 13 01:07:59.029572 sshd[3617]: Accepted publickey for core from 147.75.109.163 port 35388 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:07:59.030859 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:07:59.035175 systemd[1]: Started session-22.scope. Sep 13 01:07:59.035621 systemd-logind[1262]: New session 22 of user core. Sep 13 01:07:59.136824 sshd[3617]: pam_unix(sshd:session): session closed for user core Sep 13 01:07:59.138322 systemd[1]: sshd@19-139.178.70.99:22-147.75.109.163:35388.service: Deactivated successfully. Sep 13 01:07:59.138835 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 01:07:59.139329 systemd-logind[1262]: Session 22 logged out. Waiting for processes to exit. Sep 13 01:07:59.139831 systemd-logind[1262]: Removed session 22. Sep 13 01:08:04.140768 systemd[1]: Started sshd@20-139.178.70.99:22-147.75.109.163:36758.service. 
Sep 13 01:08:04.181938 sshd[3629]: Accepted publickey for core from 147.75.109.163 port 36758 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:08:04.183055 sshd[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:08:04.186056 systemd[1]: Started session-23.scope. Sep 13 01:08:04.186239 systemd-logind[1262]: New session 23 of user core. Sep 13 01:08:04.292067 sshd[3629]: pam_unix(sshd:session): session closed for user core Sep 13 01:08:04.294811 systemd[1]: Started sshd@21-139.178.70.99:22-147.75.109.163:36772.service. Sep 13 01:08:04.298506 systemd[1]: sshd@20-139.178.70.99:22-147.75.109.163:36758.service: Deactivated successfully. Sep 13 01:08:04.298710 systemd-logind[1262]: Session 23 logged out. Waiting for processes to exit. Sep 13 01:08:04.299279 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 01:08:04.299834 systemd-logind[1262]: Removed session 23. Sep 13 01:08:04.330544 sshd[3641]: Accepted publickey for core from 147.75.109.163 port 36772 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:08:04.331981 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:08:04.335985 systemd[1]: Started session-24.scope. Sep 13 01:08:04.336673 systemd-logind[1262]: New session 24 of user core. Sep 13 01:08:05.944507 env[1291]: time="2025-09-13T01:08:05.944476470Z" level=info msg="StopContainer for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" with timeout 30 (s)" Sep 13 01:08:05.945014 env[1291]: time="2025-09-13T01:08:05.944998793Z" level=info msg="Stop container \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" with signal terminated" Sep 13 01:08:05.950852 systemd[1]: run-containerd-runc-k8s.io-2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816-runc.Dkr7cA.mount: Deactivated successfully. 
Sep 13 01:08:05.969859 systemd[1]: cri-containerd-feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d.scope: Deactivated successfully.
Sep 13 01:08:05.973958 env[1291]: time="2025-09-13T01:08:05.973909288Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:08:05.977125 env[1291]: time="2025-09-13T01:08:05.977107805Z" level=info msg="StopContainer for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" with timeout 2 (s)"
Sep 13 01:08:05.977419 env[1291]: time="2025-09-13T01:08:05.977406695Z" level=info msg="Stop container \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" with signal terminated"
Sep 13 01:08:05.986894 systemd-networkd[1081]: lxc_health: Link DOWN
Sep 13 01:08:05.986899 systemd-networkd[1081]: lxc_health: Lost carrier
Sep 13 01:08:06.007391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d-rootfs.mount: Deactivated successfully.
Sep 13 01:08:06.007813 systemd[1]: cri-containerd-2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816.scope: Deactivated successfully.
Sep 13 01:08:06.007965 systemd[1]: cri-containerd-2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816.scope: Consumed 4.477s CPU time.
Sep 13 01:08:06.014748 env[1291]: time="2025-09-13T01:08:06.014719499Z" level=info msg="shim disconnected" id=feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d
Sep 13 01:08:06.014862 env[1291]: time="2025-09-13T01:08:06.014850720Z" level=warning msg="cleaning up after shim disconnected" id=feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d namespace=k8s.io
Sep 13 01:08:06.014923 env[1291]: time="2025-09-13T01:08:06.014912672Z" level=info msg="cleaning up dead shim"
Sep 13 01:08:06.022852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816-rootfs.mount: Deactivated successfully.
Sep 13 01:08:06.025733 env[1291]: time="2025-09-13T01:08:06.025697151Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3702 runtime=io.containerd.runc.v2\n"
Sep 13 01:08:06.026340 env[1291]: time="2025-09-13T01:08:06.026313917Z" level=info msg="shim disconnected" id=2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816
Sep 13 01:08:06.026408 env[1291]: time="2025-09-13T01:08:06.026338724Z" level=warning msg="cleaning up after shim disconnected" id=2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816 namespace=k8s.io
Sep 13 01:08:06.026408 env[1291]: time="2025-09-13T01:08:06.026349945Z" level=info msg="cleaning up dead shim"
Sep 13 01:08:06.027092 env[1291]: time="2025-09-13T01:08:06.027075453Z" level=info msg="StopContainer for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" returns successfully"
Sep 13 01:08:06.034630 env[1291]: time="2025-09-13T01:08:06.034605198Z" level=info msg="StopPodSandbox for \"48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161\""
Sep 13 01:08:06.034778 env[1291]: time="2025-09-13T01:08:06.034762821Z" level=info msg="Container to stop \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:08:06.037248 env[1291]: time="2025-09-13T01:08:06.037229087Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3722 runtime=io.containerd.runc.v2\n"
Sep 13 01:08:06.038029 env[1291]: time="2025-09-13T01:08:06.038013321Z" level=info msg="StopContainer for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" returns successfully"
Sep 13 01:08:06.038435 env[1291]: time="2025-09-13T01:08:06.038402694Z" level=info msg="StopPodSandbox for \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\""
Sep 13 01:08:06.038477 env[1291]: time="2025-09-13T01:08:06.038457198Z" level=info msg="Container to stop \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:08:06.038477 env[1291]: time="2025-09-13T01:08:06.038465725Z" level=info msg="Container to stop \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:08:06.038477 env[1291]: time="2025-09-13T01:08:06.038471829Z" level=info msg="Container to stop \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:08:06.038547 env[1291]: time="2025-09-13T01:08:06.038478678Z" level=info msg="Container to stop \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:08:06.038547 env[1291]: time="2025-09-13T01:08:06.038484249Z" level=info msg="Container to stop \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:08:06.041392 systemd[1]: cri-containerd-48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161.scope: Deactivated successfully.
Sep 13 01:08:06.042128 systemd[1]: cri-containerd-7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053.scope: Deactivated successfully.
Sep 13 01:08:06.066514 env[1291]: time="2025-09-13T01:08:06.066483332Z" level=info msg="shim disconnected" id=7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053
Sep 13 01:08:06.066711 env[1291]: time="2025-09-13T01:08:06.066699818Z" level=warning msg="cleaning up after shim disconnected" id=7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053 namespace=k8s.io
Sep 13 01:08:06.066778 env[1291]: time="2025-09-13T01:08:06.066754320Z" level=info msg="cleaning up dead shim"
Sep 13 01:08:06.070771 env[1291]: time="2025-09-13T01:08:06.070743720Z" level=info msg="shim disconnected" id=48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161
Sep 13 01:08:06.070839 env[1291]: time="2025-09-13T01:08:06.070800497Z" level=warning msg="cleaning up after shim disconnected" id=48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161 namespace=k8s.io
Sep 13 01:08:06.070839 env[1291]: time="2025-09-13T01:08:06.070809220Z" level=info msg="cleaning up dead shim"
Sep 13 01:08:06.074240 env[1291]: time="2025-09-13T01:08:06.074225787Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3770 runtime=io.containerd.runc.v2\n"
Sep 13 01:08:06.074671 env[1291]: time="2025-09-13T01:08:06.074656863Z" level=info msg="TearDown network for sandbox \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" successfully"
Sep 13 01:08:06.074728 env[1291]: time="2025-09-13T01:08:06.074716707Z" level=info msg="StopPodSandbox for \"7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053\" returns successfully"
Sep 13 01:08:06.079377 env[1291]: time="2025-09-13T01:08:06.079277888Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3777 runtime=io.containerd.runc.v2\n"
Sep 13 01:08:06.081807 env[1291]: time="2025-09-13T01:08:06.080410026Z" level=info msg="TearDown network for sandbox \"48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161\" successfully"
Sep 13 01:08:06.081807 env[1291]: time="2025-09-13T01:08:06.080434836Z" level=info msg="StopPodSandbox for \"48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161\" returns successfully"
Sep 13 01:08:06.191140 kubelet[2087]: I0913 01:08:06.189024 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.192330 kubelet[2087]: I0913 01:08:06.192306 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-etc-cni-netd\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192400 kubelet[2087]: I0913 01:08:06.192352 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-hostproc\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192400 kubelet[2087]: I0913 01:08:06.192377 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-cgroup\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192400 kubelet[2087]: I0913 01:08:06.192394 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cni-path\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192493 kubelet[2087]: I0913 01:08:06.192416 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7017425f-5a64-43eb-b5cd-e4225dc6e636-cilium-config-path\") pod \"7017425f-5a64-43eb-b5cd-e4225dc6e636\" (UID: \"7017425f-5a64-43eb-b5cd-e4225dc6e636\") "
Sep 13 01:08:06.192493 kubelet[2087]: I0913 01:08:06.192429 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-run\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192493 kubelet[2087]: I0913 01:08:06.192442 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-hubble-tls\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192493 kubelet[2087]: I0913 01:08:06.192454 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxlvg\" (UniqueName: \"kubernetes.io/projected/7017425f-5a64-43eb-b5cd-e4225dc6e636-kube-api-access-cxlvg\") pod \"7017425f-5a64-43eb-b5cd-e4225dc6e636\" (UID: \"7017425f-5a64-43eb-b5cd-e4225dc6e636\") "
Sep 13 01:08:06.192493 kubelet[2087]: I0913 01:08:06.192469 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10042a06-0b7d-475f-837e-ffb345721f86-clustermesh-secrets\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192493 kubelet[2087]: I0913 01:08:06.192487 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10042a06-0b7d-475f-837e-ffb345721f86-cilium-config-path\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192665 kubelet[2087]: I0913 01:08:06.192500 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-kernel\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192665 kubelet[2087]: I0913 01:08:06.192513 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p98bp\" (UniqueName: \"kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-kube-api-access-p98bp\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192665 kubelet[2087]: I0913 01:08:06.192526 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-xtables-lock\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192665 kubelet[2087]: I0913 01:08:06.192537 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-net\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192665 kubelet[2087]: I0913 01:08:06.192548 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-bpf-maps\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.192665 kubelet[2087]: I0913 01:08:06.192559 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-lib-modules\") pod \"10042a06-0b7d-475f-837e-ffb345721f86\" (UID: \"10042a06-0b7d-475f-837e-ffb345721f86\") "
Sep 13 01:08:06.193732 kubelet[2087]: I0913 01:08:06.193712 2087 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.193786 kubelet[2087]: I0913 01:08:06.193743 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.207659 kubelet[2087]: I0913 01:08:06.207590 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-hostproc" (OuterVolumeSpecName: "hostproc") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.207659 kubelet[2087]: I0913 01:08:06.207624 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.207659 kubelet[2087]: I0913 01:08:06.207636 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cni-path" (OuterVolumeSpecName: "cni-path") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.209145 kubelet[2087]: I0913 01:08:06.209129 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7017425f-5a64-43eb-b5cd-e4225dc6e636-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7017425f-5a64-43eb-b5cd-e4225dc6e636" (UID: "7017425f-5a64-43eb-b5cd-e4225dc6e636"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 01:08:06.209193 kubelet[2087]: I0913 01:08:06.209157 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.209362 kubelet[2087]: I0913 01:08:06.209342 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.210690 kubelet[2087]: I0913 01:08:06.210676 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10042a06-0b7d-475f-837e-ffb345721f86-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 01:08:06.211121 kubelet[2087]: I0913 01:08:06.211106 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.211166 kubelet[2087]: I0913 01:08:06.211125 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.211166 kubelet[2087]: I0913 01:08:06.211135 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:08:06.211869 kubelet[2087]: I0913 01:08:06.211853 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10042a06-0b7d-475f-837e-ffb345721f86-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 01:08:06.217375 kubelet[2087]: I0913 01:08:06.214414 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7017425f-5a64-43eb-b5cd-e4225dc6e636-kube-api-access-cxlvg" (OuterVolumeSpecName: "kube-api-access-cxlvg") pod "7017425f-5a64-43eb-b5cd-e4225dc6e636" (UID: "7017425f-5a64-43eb-b5cd-e4225dc6e636"). InnerVolumeSpecName "kube-api-access-cxlvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 01:08:06.218019 kubelet[2087]: I0913 01:08:06.218003 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 01:08:06.218210 kubelet[2087]: I0913 01:08:06.218196 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-kube-api-access-p98bp" (OuterVolumeSpecName: "kube-api-access-p98bp") pod "10042a06-0b7d-475f-837e-ffb345721f86" (UID: "10042a06-0b7d-475f-837e-ffb345721f86"). InnerVolumeSpecName "kube-api-access-p98bp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 01:08:06.294346 kubelet[2087]: I0913 01:08:06.294311 2087 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294346 kubelet[2087]: I0913 01:08:06.294339 2087 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294346 kubelet[2087]: I0913 01:08:06.294348 2087 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294356 2087 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294378 2087 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294385 2087 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294391 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294399 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7017425f-5a64-43eb-b5cd-e4225dc6e636-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294405 2087 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294411 2087 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cxlvg\" (UniqueName: \"kubernetes.io/projected/7017425f-5a64-43eb-b5cd-e4225dc6e636-kube-api-access-cxlvg\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294535 kubelet[2087]: I0913 01:08:06.294417 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294741 kubelet[2087]: I0913 01:08:06.294422 2087 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10042a06-0b7d-475f-837e-ffb345721f86-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294741 kubelet[2087]: I0913 01:08:06.294429 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10042a06-0b7d-475f-837e-ffb345721f86-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294741 kubelet[2087]: I0913 01:08:06.294435 2087 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10042a06-0b7d-475f-837e-ffb345721f86-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.294741 kubelet[2087]: I0913 01:08:06.294441 2087 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p98bp\" (UniqueName: \"kubernetes.io/projected/10042a06-0b7d-475f-837e-ffb345721f86-kube-api-access-p98bp\") on node \"localhost\" DevicePath \"\""
Sep 13 01:08:06.604156 kubelet[2087]: I0913 01:08:06.603609 2087 scope.go:117] "RemoveContainer" containerID="2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816"
Sep 13 01:08:06.603973 systemd[1]: Removed slice kubepods-burstable-pod10042a06_0b7d_475f_837e_ffb345721f86.slice.
Sep 13 01:08:06.604023 systemd[1]: kubepods-burstable-pod10042a06_0b7d_475f_837e_ffb345721f86.slice: Consumed 4.551s CPU time.
Sep 13 01:08:06.606821 env[1291]: time="2025-09-13T01:08:06.606797616Z" level=info msg="RemoveContainer for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\""
Sep 13 01:08:06.608550 env[1291]: time="2025-09-13T01:08:06.608525069Z" level=info msg="RemoveContainer for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" returns successfully"
Sep 13 01:08:06.609815 kubelet[2087]: I0913 01:08:06.609804 2087 scope.go:117] "RemoveContainer" containerID="6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a"
Sep 13 01:08:06.612452 systemd[1]: Removed slice kubepods-besteffort-pod7017425f_5a64_43eb_b5cd_e4225dc6e636.slice.
Sep 13 01:08:06.613860 env[1291]: time="2025-09-13T01:08:06.613624700Z" level=info msg="RemoveContainer for \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\""
Sep 13 01:08:06.615618 env[1291]: time="2025-09-13T01:08:06.615563394Z" level=info msg="RemoveContainer for \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\" returns successfully"
Sep 13 01:08:06.617657 kubelet[2087]: I0913 01:08:06.617328 2087 scope.go:117] "RemoveContainer" containerID="7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b"
Sep 13 01:08:06.617878 env[1291]: time="2025-09-13T01:08:06.617858035Z" level=info msg="RemoveContainer for \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\""
Sep 13 01:08:06.619328 env[1291]: time="2025-09-13T01:08:06.619226600Z" level=info msg="RemoveContainer for \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\" returns successfully"
Sep 13 01:08:06.619721 kubelet[2087]: I0913 01:08:06.619443 2087 scope.go:117] "RemoveContainer" containerID="08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3"
Sep 13 01:08:06.622290 env[1291]: time="2025-09-13T01:08:06.622265947Z" level=info msg="RemoveContainer for \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\""
Sep 13 01:08:06.623384 env[1291]: time="2025-09-13T01:08:06.623359129Z" level=info msg="RemoveContainer for \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\" returns successfully"
Sep 13 01:08:06.623533 kubelet[2087]: I0913 01:08:06.623519 2087 scope.go:117] "RemoveContainer" containerID="4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d"
Sep 13 01:08:06.624262 env[1291]: time="2025-09-13T01:08:06.624245262Z" level=info msg="RemoveContainer for \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\""
Sep 13 01:08:06.625327 env[1291]: time="2025-09-13T01:08:06.625311287Z" level=info msg="RemoveContainer for \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\" returns successfully"
Sep 13 01:08:06.625454 kubelet[2087]: I0913 01:08:06.625437 2087 scope.go:117] "RemoveContainer" containerID="2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816"
Sep 13 01:08:06.625677 env[1291]: time="2025-09-13T01:08:06.625609507Z" level=error msg="ContainerStatus for \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\": not found"
Sep 13 01:08:06.627612 kubelet[2087]: E0913 01:08:06.627597 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\": not found" containerID="2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816"
Sep 13 01:08:06.627672 kubelet[2087]: I0913 01:08:06.627621 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816"} err="failed to get container status \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e7a58a05880c6e3e9b4a437539dbd68c649a079a09e7a7171efec49a48c7816\": not found"
Sep 13 01:08:06.627672 kubelet[2087]: I0913 01:08:06.627671 2087 scope.go:117] "RemoveContainer" containerID="6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a"
Sep 13 01:08:06.628164 env[1291]: time="2025-09-13T01:08:06.628134750Z" level=error msg="ContainerStatus for \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\": not found"
Sep 13 01:08:06.628254 kubelet[2087]: E0913 01:08:06.628242 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\": not found" containerID="6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a"
Sep 13 01:08:06.628318 kubelet[2087]: I0913 01:08:06.628306 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a"} err="failed to get container status \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6321bad41dff50c33b8bb112bf3fc24480ad357725aee5ef040d66ca690ffe0a\": not found"
Sep 13 01:08:06.628373 kubelet[2087]: I0913 01:08:06.628357 2087 scope.go:117] "RemoveContainer" containerID="7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b"
Sep 13 01:08:06.628570 env[1291]: time="2025-09-13T01:08:06.628538650Z" level=error msg="ContainerStatus for \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\": not found"
Sep 13 01:08:06.628645 kubelet[2087]: E0913 01:08:06.628627 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\": not found" containerID="7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b"
Sep 13 01:08:06.628645 kubelet[2087]: I0913 01:08:06.628641 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b"} err="failed to get container status \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7eb0b8bda9e1dd47528daa0eacc8b0a0d013b7047e376933b8e19fc21e55035b\": not found"
Sep 13 01:08:06.628700 kubelet[2087]: I0913 01:08:06.628650 2087 scope.go:117] "RemoveContainer" containerID="08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3"
Sep 13 01:08:06.628737 env[1291]: time="2025-09-13T01:08:06.628713932Z" level=error msg="ContainerStatus for \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\": not found"
Sep 13 01:08:06.628788 kubelet[2087]: E0913 01:08:06.628777 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\": not found" containerID="08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3"
Sep 13 01:08:06.628828 kubelet[2087]: I0913 01:08:06.628788 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3"} err="failed to get container status \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"08978a0f0b85d44f1b5adb76fde298e9d72e8faa7b20ee768fc6a8bf25d2e9c3\": not found"
Sep 13 01:08:06.628828 kubelet[2087]: I0913 01:08:06.628796 2087 scope.go:117] "RemoveContainer" containerID="4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d"
Sep 13 01:08:06.629194 kubelet[2087]: E0913 01:08:06.628914 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\": not found" containerID="4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d"
Sep 13 01:08:06.629194 kubelet[2087]: I0913 01:08:06.628923 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d"} err="failed to get container status \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\": not found"
Sep 13 01:08:06.629194 kubelet[2087]: I0913 01:08:06.628930 2087 scope.go:117] "RemoveContainer" containerID="feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d"
Sep 13 01:08:06.629268 env[1291]: time="2025-09-13T01:08:06.628857593Z" level=error msg="ContainerStatus for \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4956b867f10568285cf2dad306cd26226abc37da2337c9a8d71a49fe37b3867d\": not found"
Sep 13 01:08:06.629461 env[1291]: time="2025-09-13T01:08:06.629447647Z" level=info msg="RemoveContainer for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\""
Sep 13 01:08:06.630442 env[1291]: time="2025-09-13T01:08:06.630407728Z" level=info msg="RemoveContainer for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" returns successfully"
Sep 13 01:08:06.630519 kubelet[2087]: I0913 01:08:06.630509 2087 scope.go:117] "RemoveContainer" containerID="feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d"
Sep 13 01:08:06.630609 env[1291]: time="2025-09-13T01:08:06.630583033Z" level=error msg="ContainerStatus for \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\": not found"
Sep 13 01:08:06.630667 kubelet[2087]: E0913 01:08:06.630647 2087 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\": not found" containerID="feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d"
Sep 13 01:08:06.630667 kubelet[2087]: I0913 01:08:06.630660 2087 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d"} err="failed to get container status \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"feba2ee23932e81c309898618a20cb8222cc8d13df8973e2908ff9364be10f7d\": not found"
Sep 13 01:08:06.943381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053-rootfs.mount: Deactivated successfully.
Sep 13 01:08:06.943454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f0e732fe25d8749f1643553e59f060c664fcb858a6b2997e25b398c99546053-shm.mount: Deactivated successfully.
Sep 13 01:08:06.943507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161-rootfs.mount: Deactivated successfully.
Sep 13 01:08:06.943552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48dfeb1d6dce71aca95bad001797d30f7802d566e47c797fb4bfc148378ec161-shm.mount: Deactivated successfully.
Sep 13 01:08:06.943597 systemd[1]: var-lib-kubelet-pods-7017425f\x2d5a64\x2d43eb\x2db5cd\x2de4225dc6e636-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxlvg.mount: Deactivated successfully. Sep 13 01:08:06.943642 systemd[1]: var-lib-kubelet-pods-10042a06\x2d0b7d\x2d475f\x2d837e\x2dffb345721f86-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:08:06.943693 systemd[1]: var-lib-kubelet-pods-10042a06\x2d0b7d\x2d475f\x2d837e\x2dffb345721f86-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:08:06.943739 systemd[1]: var-lib-kubelet-pods-10042a06\x2d0b7d\x2d475f\x2d837e\x2dffb345721f86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp98bp.mount: Deactivated successfully. Sep 13 01:08:07.907970 sshd[3641]: pam_unix(sshd:session): session closed for user core Sep 13 01:08:07.913151 systemd[1]: Started sshd@22-139.178.70.99:22-147.75.109.163:36776.service. Sep 13 01:08:07.914432 systemd[1]: sshd@21-139.178.70.99:22-147.75.109.163:36772.service: Deactivated successfully. Sep 13 01:08:07.914830 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 01:08:07.915255 systemd-logind[1262]: Session 24 logged out. Waiting for processes to exit. Sep 13 01:08:07.915984 systemd-logind[1262]: Removed session 24. Sep 13 01:08:07.962917 sshd[3803]: Accepted publickey for core from 147.75.109.163 port 36776 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:08:07.964235 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:08:07.969433 systemd[1]: Started session-25.scope. Sep 13 01:08:07.970324 systemd-logind[1262]: New session 25 of user core. 
Sep 13 01:08:08.149775 kubelet[2087]: I0913 01:08:08.149748 2087 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10042a06-0b7d-475f-837e-ffb345721f86" path="/var/lib/kubelet/pods/10042a06-0b7d-475f-837e-ffb345721f86/volumes" Sep 13 01:08:08.150743 kubelet[2087]: I0913 01:08:08.150729 2087 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7017425f-5a64-43eb-b5cd-e4225dc6e636" path="/var/lib/kubelet/pods/7017425f-5a64-43eb-b5cd-e4225dc6e636/volumes" Sep 13 01:08:08.445495 sshd[3803]: pam_unix(sshd:session): session closed for user core Sep 13 01:08:08.447229 systemd[1]: sshd@22-139.178.70.99:22-147.75.109.163:36776.service: Deactivated successfully. Sep 13 01:08:08.447638 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 01:08:08.448112 systemd-logind[1262]: Session 25 logged out. Waiting for processes to exit. Sep 13 01:08:08.448971 systemd[1]: Started sshd@23-139.178.70.99:22-147.75.109.163:36778.service. Sep 13 01:08:08.450111 systemd-logind[1262]: Removed session 25. Sep 13 01:08:08.475476 kubelet[2087]: I0913 01:08:08.475444 2087 memory_manager.go:355] "RemoveStaleState removing state" podUID="7017425f-5a64-43eb-b5cd-e4225dc6e636" containerName="cilium-operator" Sep 13 01:08:08.475476 kubelet[2087]: I0913 01:08:08.475465 2087 memory_manager.go:355] "RemoveStaleState removing state" podUID="10042a06-0b7d-475f-837e-ffb345721f86" containerName="cilium-agent" Sep 13 01:08:08.487611 sshd[3814]: Accepted publickey for core from 147.75.109.163 port 36778 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:08:08.487522 systemd[1]: Created slice kubepods-burstable-podb6c43b62_631a_41a2_a970_2aef5c9c711c.slice. Sep 13 01:08:08.489374 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:08:08.492452 systemd-logind[1262]: New session 26 of user core. Sep 13 01:08:08.492926 systemd[1]: Started session-26.scope. 
Sep 13 01:08:08.509234 kubelet[2087]: I0913 01:08:08.509174 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-hostproc\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509234 kubelet[2087]: I0913 01:08:08.509214 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-etc-cni-netd\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509426 kubelet[2087]: I0913 01:08:08.509228 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-run\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509426 kubelet[2087]: I0913 01:08:08.509254 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-net\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509426 kubelet[2087]: I0913 01:08:08.509265 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-ipsec-secrets\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509426 kubelet[2087]: I0913 01:08:08.509273 2087 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-hubble-tls\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509426 kubelet[2087]: I0913 01:08:08.509284 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-config-path\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509426 kubelet[2087]: I0913 01:08:08.509293 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-cgroup\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509563 kubelet[2087]: I0913 01:08:08.509302 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cni-path\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509563 kubelet[2087]: I0913 01:08:08.509320 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-xtables-lock\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509563 kubelet[2087]: I0913 01:08:08.509332 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-clustermesh-secrets\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509563 kubelet[2087]: I0913 01:08:08.509341 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-kernel\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509563 kubelet[2087]: I0913 01:08:08.509350 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-lib-modules\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509563 kubelet[2087]: I0913 01:08:08.509362 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44npc\" (UniqueName: \"kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-kube-api-access-44npc\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.509678 kubelet[2087]: I0913 01:08:08.509391 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-bpf-maps\") pod \"cilium-bqdzg\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " pod="kube-system/cilium-bqdzg" Sep 13 01:08:08.675160 sshd[3814]: pam_unix(sshd:session): session closed for user core Sep 13 01:08:08.677871 systemd[1]: Started sshd@24-139.178.70.99:22-147.75.109.163:36782.service. 
Sep 13 01:08:08.682300 env[1291]: time="2025-09-13T01:08:08.682273572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqdzg,Uid:b6c43b62-631a-41a2-a970-2aef5c9c711c,Namespace:kube-system,Attempt:0,}" Sep 13 01:08:08.684622 systemd[1]: sshd@23-139.178.70.99:22-147.75.109.163:36778.service: Deactivated successfully. Sep 13 01:08:08.687059 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 01:08:08.689118 systemd-logind[1262]: Session 26 logged out. Waiting for processes to exit. Sep 13 01:08:08.690659 systemd-logind[1262]: Removed session 26. Sep 13 01:08:08.693181 env[1291]: time="2025-09-13T01:08:08.692969737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:08:08.693181 env[1291]: time="2025-09-13T01:08:08.692991479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:08:08.693181 env[1291]: time="2025-09-13T01:08:08.692998254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:08:08.693181 env[1291]: time="2025-09-13T01:08:08.693086710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3 pid=3837 runtime=io.containerd.runc.v2 Sep 13 01:08:08.704867 systemd[1]: Started cri-containerd-c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3.scope. Sep 13 01:08:08.727154 sshd[3828]: Accepted publickey for core from 147.75.109.163 port 36782 ssh2: RSA SHA256:sJGDjo0Z2Vx3Gx4EUnUZFO+gxzu8eUeKwoabCfe3hp8 Sep 13 01:08:08.728051 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:08:08.732108 systemd-logind[1262]: New session 27 of user core. 
Sep 13 01:08:08.732646 systemd[1]: Started session-27.scope. Sep 13 01:08:08.736383 env[1291]: time="2025-09-13T01:08:08.735950665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqdzg,Uid:b6c43b62-631a-41a2-a970-2aef5c9c711c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3\"" Sep 13 01:08:08.739265 env[1291]: time="2025-09-13T01:08:08.739247488Z" level=info msg="CreateContainer within sandbox \"c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:08:08.745221 env[1291]: time="2025-09-13T01:08:08.745200303Z" level=info msg="CreateContainer within sandbox \"c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\"" Sep 13 01:08:08.745538 env[1291]: time="2025-09-13T01:08:08.745525423Z" level=info msg="StartContainer for \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\"" Sep 13 01:08:08.754478 systemd[1]: Started cri-containerd-6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e.scope. Sep 13 01:08:08.763437 systemd[1]: cri-containerd-6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e.scope: Deactivated successfully. Sep 13 01:08:08.763585 systemd[1]: Stopped cri-containerd-6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e.scope. 
Sep 13 01:08:08.776002 env[1291]: time="2025-09-13T01:08:08.775968980Z" level=info msg="shim disconnected" id=6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e Sep 13 01:08:08.776002 env[1291]: time="2025-09-13T01:08:08.776000700Z" level=warning msg="cleaning up after shim disconnected" id=6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e namespace=k8s.io Sep 13 01:08:08.776116 env[1291]: time="2025-09-13T01:08:08.776006860Z" level=info msg="cleaning up dead shim" Sep 13 01:08:08.783411 env[1291]: time="2025-09-13T01:08:08.782710666Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T01:08:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 01:08:08.783411 env[1291]: time="2025-09-13T01:08:08.782882669Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed" Sep 13 01:08:08.783411 env[1291]: time="2025-09-13T01:08:08.783085553Z" level=error msg="Failed to pipe stdout of container \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\"" error="reading from a closed fifo" Sep 13 01:08:08.783623 env[1291]: time="2025-09-13T01:08:08.783603079Z" level=error msg="Failed to pipe stderr of container \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\"" error="reading from a closed fifo" Sep 13 01:08:08.784533 env[1291]: time="2025-09-13T01:08:08.784083800Z" level=error msg="StartContainer for \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 01:08:08.784576 kubelet[2087]: E0913 01:08:08.784447 2087 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e" Sep 13 01:08:08.787801 kubelet[2087]: E0913 01:08:08.787012 2087 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 13 01:08:08.787801 kubelet[2087]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 01:08:08.787801 kubelet[2087]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 01:08:08.787801 kubelet[2087]: rm /hostbin/cilium-mount Sep 13 01:08:08.787915 kubelet[2087]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44npc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bqdzg_kube-system(b6c43b62-631a-41a2-a970-2aef5c9c711c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 01:08:08.787915 kubelet[2087]: > logger="UnhandledError" Sep 13 01:08:08.788191 kubelet[2087]: E0913 01:08:08.788089 2087 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bqdzg" podUID="b6c43b62-631a-41a2-a970-2aef5c9c711c" Sep 13 01:08:09.237893 kubelet[2087]: E0913 01:08:09.237859 2087 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:08:09.618502 env[1291]: time="2025-09-13T01:08:09.618471090Z" level=info msg="StopPodSandbox for \"c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3\"" Sep 13 01:08:09.618619 env[1291]: time="2025-09-13T01:08:09.618515796Z" level=info msg="Container to stop \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:08:09.619854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3-shm.mount: Deactivated successfully. Sep 13 01:08:09.623950 systemd[1]: cri-containerd-c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3.scope: Deactivated successfully. Sep 13 01:08:09.636905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3-rootfs.mount: Deactivated successfully. 
Sep 13 01:08:09.640029 env[1291]: time="2025-09-13T01:08:09.639989227Z" level=info msg="shim disconnected" id=c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3 Sep 13 01:08:09.640106 env[1291]: time="2025-09-13T01:08:09.640025566Z" level=warning msg="cleaning up after shim disconnected" id=c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3 namespace=k8s.io Sep 13 01:08:09.640106 env[1291]: time="2025-09-13T01:08:09.640035870Z" level=info msg="cleaning up dead shim" Sep 13 01:08:09.644678 env[1291]: time="2025-09-13T01:08:09.644660876Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3934 runtime=io.containerd.runc.v2\n" Sep 13 01:08:09.644866 env[1291]: time="2025-09-13T01:08:09.644829067Z" level=info msg="TearDown network for sandbox \"c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3\" successfully" Sep 13 01:08:09.644866 env[1291]: time="2025-09-13T01:08:09.644860644Z" level=info msg="StopPodSandbox for \"c8538e42db50ccf4b84b0d61df68621ecefba3071034f8ca88a463cd036454f3\" returns successfully" Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717513 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-kernel\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717541 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-xtables-lock\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717559 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cni-path\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717572 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-clustermesh-secrets\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717586 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-hubble-tls\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717587 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717594 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-cgroup\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717610 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-ipsec-secrets\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717611 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717619 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-net\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717628 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-run\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717634 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717645 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717648 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-bpf-maps\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718251 kubelet[2087]: I0913 01:08:09.717658 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-lib-modules\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717669 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44npc\" (UniqueName: \"kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-kube-api-access-44npc\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717680 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-hostproc\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717692 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-config-path\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717703 2087 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-etc-cni-netd\") pod \"b6c43b62-631a-41a2-a970-2aef5c9c711c\" (UID: \"b6c43b62-631a-41a2-a970-2aef5c9c711c\") " Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717726 2087 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717732 2087 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717738 2087 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717743 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.717759 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.718114 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.718129 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.718138 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.718837 kubelet[2087]: I0913 01:08:09.718147 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.720706 systemd[1]: var-lib-kubelet-pods-b6c43b62\x2d631a\x2d41a2\x2da970\x2d2aef5c9c711c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 01:08:09.721429 kubelet[2087]: I0913 01:08:09.721411 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:08:09.721478 kubelet[2087]: I0913 01:08:09.721434 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:08:09.722760 systemd[1]: var-lib-kubelet-pods-b6c43b62\x2d631a\x2d41a2\x2da970\x2d2aef5c9c711c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 01:08:09.723896 kubelet[2087]: I0913 01:08:09.723877 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:08:09.723938 kubelet[2087]: I0913 01:08:09.723928 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:08:09.725026 kubelet[2087]: I0913 01:08:09.725012 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-kube-api-access-44npc" (OuterVolumeSpecName: "kube-api-access-44npc") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "kube-api-access-44npc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:08:09.725358 kubelet[2087]: I0913 01:08:09.725339 2087 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6c43b62-631a-41a2-a970-2aef5c9c711c" (UID: "b6c43b62-631a-41a2-a970-2aef5c9c711c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:08:09.818731 kubelet[2087]: I0913 01:08:09.818698 2087 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818731 kubelet[2087]: I0913 01:08:09.818726 2087 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818731 kubelet[2087]: I0913 01:08:09.818737 2087 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-44npc\" (UniqueName: \"kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-kube-api-access-44npc\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818744 2087 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-hostproc\") on node \"localhost\" 
DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818750 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818757 2087 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818763 2087 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818771 2087 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6c43b62-631a-41a2-a970-2aef5c9c711c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818777 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818782 2087 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:09.818901 kubelet[2087]: I0913 01:08:09.818791 2087 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6c43b62-631a-41a2-a970-2aef5c9c711c-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 01:08:10.151526 systemd[1]: Removed slice 
kubepods-burstable-podb6c43b62_631a_41a2_a970_2aef5c9c711c.slice. Sep 13 01:08:10.616430 systemd[1]: var-lib-kubelet-pods-b6c43b62\x2d631a\x2d41a2\x2da970\x2d2aef5c9c711c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d44npc.mount: Deactivated successfully. Sep 13 01:08:10.616511 systemd[1]: var-lib-kubelet-pods-b6c43b62\x2d631a\x2d41a2\x2da970\x2d2aef5c9c711c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:08:10.620951 kubelet[2087]: I0913 01:08:10.620930 2087 scope.go:117] "RemoveContainer" containerID="6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e" Sep 13 01:08:10.622020 env[1291]: time="2025-09-13T01:08:10.621990730Z" level=info msg="RemoveContainer for \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\"" Sep 13 01:08:10.623757 env[1291]: time="2025-09-13T01:08:10.623716972Z" level=info msg="RemoveContainer for \"6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e\" returns successfully" Sep 13 01:08:10.646838 kubelet[2087]: I0913 01:08:10.646813 2087 memory_manager.go:355] "RemoveStaleState removing state" podUID="b6c43b62-631a-41a2-a970-2aef5c9c711c" containerName="mount-cgroup" Sep 13 01:08:10.650411 systemd[1]: Created slice kubepods-burstable-pod9dbe9084_b7d2_40fb_a527_f77fd9badebf.slice. 
Sep 13 01:08:10.725213 kubelet[2087]: I0913 01:08:10.725178 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-cilium-cgroup\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725213 kubelet[2087]: I0913 01:08:10.725212 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-bpf-maps\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725228 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-xtables-lock\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725242 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-host-proc-sys-net\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725262 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9dbe9084-b7d2-40fb-a527-f77fd9badebf-hubble-tls\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725278 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6jzx8\" (UniqueName: \"kubernetes.io/projected/9dbe9084-b7d2-40fb-a527-f77fd9badebf-kube-api-access-6jzx8\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725292 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9dbe9084-b7d2-40fb-a527-f77fd9badebf-clustermesh-secrets\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725303 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9dbe9084-b7d2-40fb-a527-f77fd9badebf-cilium-ipsec-secrets\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725319 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-hostproc\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725330 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-cni-path\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725341 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-host-proc-sys-kernel\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725360 kubelet[2087]: I0913 01:08:10.725352 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-lib-modules\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725636 kubelet[2087]: I0913 01:08:10.725375 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-etc-cni-netd\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725636 kubelet[2087]: I0913 01:08:10.725388 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9dbe9084-b7d2-40fb-a527-f77fd9badebf-cilium-config-path\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.725636 kubelet[2087]: I0913 01:08:10.725399 2087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9dbe9084-b7d2-40fb-a527-f77fd9badebf-cilium-run\") pod \"cilium-zkw9h\" (UID: \"9dbe9084-b7d2-40fb-a527-f77fd9badebf\") " pod="kube-system/cilium-zkw9h" Sep 13 01:08:10.953636 env[1291]: time="2025-09-13T01:08:10.953181668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkw9h,Uid:9dbe9084-b7d2-40fb-a527-f77fd9badebf,Namespace:kube-system,Attempt:0,}" Sep 13 01:08:10.984207 env[1291]: time="2025-09-13T01:08:10.984154166Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:08:10.984207 env[1291]: time="2025-09-13T01:08:10.984186340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:08:10.984384 env[1291]: time="2025-09-13T01:08:10.984338716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:08:10.984582 env[1291]: time="2025-09-13T01:08:10.984547494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8 pid=3962 runtime=io.containerd.runc.v2 Sep 13 01:08:10.993428 systemd[1]: Started cri-containerd-93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8.scope. Sep 13 01:08:11.010924 env[1291]: time="2025-09-13T01:08:11.010890007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkw9h,Uid:9dbe9084-b7d2-40fb-a527-f77fd9badebf,Namespace:kube-system,Attempt:0,} returns sandbox id \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\"" Sep 13 01:08:11.021859 env[1291]: time="2025-09-13T01:08:11.021824853Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:08:11.029202 env[1291]: time="2025-09-13T01:08:11.029175044Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437\"" Sep 13 01:08:11.030238 env[1291]: time="2025-09-13T01:08:11.030217525Z" level=info msg="StartContainer for \"53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437\"" 
Sep 13 01:08:11.039917 systemd[1]: Started cri-containerd-53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437.scope. Sep 13 01:08:11.057762 env[1291]: time="2025-09-13T01:08:11.057733186Z" level=info msg="StartContainer for \"53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437\" returns successfully" Sep 13 01:08:11.074297 systemd[1]: cri-containerd-53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437.scope: Deactivated successfully. Sep 13 01:08:11.090902 env[1291]: time="2025-09-13T01:08:11.090874733Z" level=info msg="shim disconnected" id=53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437 Sep 13 01:08:11.091151 env[1291]: time="2025-09-13T01:08:11.091139699Z" level=warning msg="cleaning up after shim disconnected" id=53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437 namespace=k8s.io Sep 13 01:08:11.091216 env[1291]: time="2025-09-13T01:08:11.091206307Z" level=info msg="cleaning up dead shim" Sep 13 01:08:11.096098 env[1291]: time="2025-09-13T01:08:11.096076455Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4049 runtime=io.containerd.runc.v2\n" Sep 13 01:08:11.625024 env[1291]: time="2025-09-13T01:08:11.624982192Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:08:11.635589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024916936.mount: Deactivated successfully. 
Sep 13 01:08:11.640544 env[1291]: time="2025-09-13T01:08:11.640514385Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13\"" Sep 13 01:08:11.641903 env[1291]: time="2025-09-13T01:08:11.641886871Z" level=info msg="StartContainer for \"43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13\"" Sep 13 01:08:11.654857 systemd[1]: Started cri-containerd-43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13.scope. Sep 13 01:08:11.676616 env[1291]: time="2025-09-13T01:08:11.676571816Z" level=info msg="StartContainer for \"43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13\" returns successfully" Sep 13 01:08:11.687641 systemd[1]: cri-containerd-43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13.scope: Deactivated successfully. Sep 13 01:08:11.700600 env[1291]: time="2025-09-13T01:08:11.700570852Z" level=info msg="shim disconnected" id=43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13 Sep 13 01:08:11.700600 env[1291]: time="2025-09-13T01:08:11.700599215Z" level=warning msg="cleaning up after shim disconnected" id=43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13 namespace=k8s.io Sep 13 01:08:11.700725 env[1291]: time="2025-09-13T01:08:11.700605446Z" level=info msg="cleaning up dead shim" Sep 13 01:08:11.704896 env[1291]: time="2025-09-13T01:08:11.704878624Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4111 runtime=io.containerd.runc.v2\n" Sep 13 01:08:11.880561 kubelet[2087]: W0913 01:08:11.880463 2087 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6c43b62_631a_41a2_a970_2aef5c9c711c.slice/cri-containerd-6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e.scope WatchSource:0}: container "6c92e5ec97bb8ce3bc8bf7d77d940c07860c25be8445fd0b9d377dc81b74705e" in namespace "k8s.io": not found Sep 13 01:08:12.148723 kubelet[2087]: I0913 01:08:12.148654 2087 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c43b62-631a-41a2-a970-2aef5c9c711c" path="/var/lib/kubelet/pods/b6c43b62-631a-41a2-a970-2aef5c9c711c/volumes" Sep 13 01:08:12.628190 env[1291]: time="2025-09-13T01:08:12.628165724Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:08:12.695276 env[1291]: time="2025-09-13T01:08:12.695242415Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69\"" Sep 13 01:08:12.695797 env[1291]: time="2025-09-13T01:08:12.695782486Z" level=info msg="StartContainer for \"c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69\"" Sep 13 01:08:12.709058 systemd[1]: Started cri-containerd-c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69.scope. Sep 13 01:08:12.727183 env[1291]: time="2025-09-13T01:08:12.727153482Z" level=info msg="StartContainer for \"c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69\" returns successfully" Sep 13 01:08:12.733117 systemd[1]: cri-containerd-c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69.scope: Deactivated successfully. 
Sep 13 01:08:12.747513 env[1291]: time="2025-09-13T01:08:12.747486216Z" level=info msg="shim disconnected" id=c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69 Sep 13 01:08:12.747679 env[1291]: time="2025-09-13T01:08:12.747665519Z" level=warning msg="cleaning up after shim disconnected" id=c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69 namespace=k8s.io Sep 13 01:08:12.747755 env[1291]: time="2025-09-13T01:08:12.747745124Z" level=info msg="cleaning up dead shim" Sep 13 01:08:12.752402 env[1291]: time="2025-09-13T01:08:12.752385935Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:08:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4169 runtime=io.containerd.runc.v2\n" Sep 13 01:08:13.616601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69-rootfs.mount: Deactivated successfully. Sep 13 01:08:13.633967 env[1291]: time="2025-09-13T01:08:13.633945712Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:08:13.639844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768798843.mount: Deactivated successfully. Sep 13 01:08:13.643245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604360367.mount: Deactivated successfully. 
Sep 13 01:08:13.644826 env[1291]: time="2025-09-13T01:08:13.644801947Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44\"" Sep 13 01:08:13.645270 env[1291]: time="2025-09-13T01:08:13.645256720Z" level=info msg="StartContainer for \"bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44\"" Sep 13 01:08:13.657546 systemd[1]: Started cri-containerd-bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44.scope. Sep 13 01:08:13.673808 systemd[1]: cri-containerd-bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44.scope: Deactivated successfully. Sep 13 01:08:13.674943 env[1291]: time="2025-09-13T01:08:13.674899382Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbe9084_b7d2_40fb_a527_f77fd9badebf.slice/cri-containerd-bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44.scope/memory.events\": no such file or directory" Sep 13 01:08:13.675460 env[1291]: time="2025-09-13T01:08:13.675438968Z" level=info msg="StartContainer for \"bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44\" returns successfully" Sep 13 01:08:13.686636 env[1291]: time="2025-09-13T01:08:13.686604786Z" level=info msg="shim disconnected" id=bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44 Sep 13 01:08:13.686636 env[1291]: time="2025-09-13T01:08:13.686635290Z" level=warning msg="cleaning up after shim disconnected" id=bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44 namespace=k8s.io Sep 13 01:08:13.686825 env[1291]: time="2025-09-13T01:08:13.686641933Z" level=info msg="cleaning up dead shim" Sep 13 01:08:13.691063 env[1291]: time="2025-09-13T01:08:13.691042068Z" level=warning 
msg="cleanup warnings time=\"2025-09-13T01:08:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4227 runtime=io.containerd.runc.v2\n" Sep 13 01:08:14.238774 kubelet[2087]: E0913 01:08:14.238743 2087 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:08:14.636233 env[1291]: time="2025-09-13T01:08:14.636204173Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:08:14.643911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445132944.mount: Deactivated successfully. Sep 13 01:08:14.648106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383156077.mount: Deactivated successfully. Sep 13 01:08:14.652734 env[1291]: time="2025-09-13T01:08:14.652707905Z" level=info msg="CreateContainer within sandbox \"93b51984fd4c6ebc359921420d6f458dad5bf0146529b3dfe0569c2409664ee8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db\"" Sep 13 01:08:14.653841 env[1291]: time="2025-09-13T01:08:14.653820593Z" level=info msg="StartContainer for \"5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db\"" Sep 13 01:08:14.663501 systemd[1]: Started cri-containerd-5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db.scope. 
Sep 13 01:08:14.681151 env[1291]: time="2025-09-13T01:08:14.681127745Z" level=info msg="StartContainer for \"5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db\" returns successfully" Sep 13 01:08:14.989996 kubelet[2087]: W0913 01:08:14.987881 2087 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbe9084_b7d2_40fb_a527_f77fd9badebf.slice/cri-containerd-53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437.scope WatchSource:0}: task 53719db53a62861f366a8e28755513b0af13db4a06eec440c75fbf8a88cf6437 not found: not found Sep 13 01:08:15.185382 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 01:08:16.884709 kubelet[2087]: I0913 01:08:16.884679 2087 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T01:08:16Z","lastTransitionTime":"2025-09-13T01:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 01:08:17.618671 systemd-networkd[1081]: lxc_health: Link UP Sep 13 01:08:17.628167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 01:08:17.625478 systemd-networkd[1081]: lxc_health: Gained carrier Sep 13 01:08:18.096334 kubelet[2087]: W0913 01:08:18.096296 2087 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbe9084_b7d2_40fb_a527_f77fd9badebf.slice/cri-containerd-43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13.scope WatchSource:0}: task 43a23c463219f14c1772ac63caf95cf3df2fb52acd8cc84d555ce0bfa832ad13 not found: not found Sep 13 01:08:18.971255 kubelet[2087]: I0913 01:08:18.971214 2087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zkw9h" 
podStartSLOduration=8.971196915 podStartE2EDuration="8.971196915s" podCreationTimestamp="2025-09-13 01:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:08:15.650895794 +0000 UTC m=+131.664635213" watchObservedRunningTime="2025-09-13 01:08:18.971196915 +0000 UTC m=+134.984936329" Sep 13 01:08:19.029466 systemd-networkd[1081]: lxc_health: Gained IPv6LL Sep 13 01:08:19.060117 systemd[1]: run-containerd-runc-k8s.io-5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db-runc.iQxdVi.mount: Deactivated successfully. Sep 13 01:08:21.174825 systemd[1]: run-containerd-runc-k8s.io-5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db-runc.GMkkcs.mount: Deactivated successfully. Sep 13 01:08:21.207886 kubelet[2087]: W0913 01:08:21.207532 2087 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbe9084_b7d2_40fb_a527_f77fd9badebf.slice/cri-containerd-c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69.scope WatchSource:0}: task c3887b4910171dc5d89f9b23b269e77b298c13664cf5a8c1454656d8076c7d69 not found: not found Sep 13 01:08:23.252079 systemd[1]: run-containerd-runc-k8s.io-5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db-runc.94osyD.mount: Deactivated successfully. 
Sep 13 01:08:24.312421 kubelet[2087]: W0913 01:08:24.312352 2087 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbe9084_b7d2_40fb_a527_f77fd9badebf.slice/cri-containerd-bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44.scope WatchSource:0}: task bc1271770bd40d6898fe290c423c4bcd565b46fb353407624c9af9c436e77f44 not found: not found Sep 13 01:08:25.328769 systemd[1]: run-containerd-runc-k8s.io-5de09e5ccd46edd7eeaf6ab1fad407c17d6faddbb876f6b6014e02c4cec6b2db-runc.KpB8tk.mount: Deactivated successfully. Sep 13 01:08:25.365337 sshd[3828]: pam_unix(sshd:session): session closed for user core Sep 13 01:08:25.372873 systemd[1]: sshd@24-139.178.70.99:22-147.75.109.163:36782.service: Deactivated successfully. Sep 13 01:08:25.373336 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 01:08:25.374087 systemd-logind[1262]: Session 27 logged out. Waiting for processes to exit. Sep 13 01:08:25.374617 systemd-logind[1262]: Removed session 27.