May 8 00:05:39.774049 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025 May 8 00:05:39.774066 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:05:39.774072 kernel: Disabled fast string operations May 8 00:05:39.774076 kernel: BIOS-provided physical RAM map: May 8 00:05:39.774080 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 8 00:05:39.774085 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 8 00:05:39.774091 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 8 00:05:39.774095 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 8 00:05:39.774100 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 8 00:05:39.774104 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 8 00:05:39.774108 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 8 00:05:39.774113 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 8 00:05:39.774117 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 8 00:05:39.774121 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 8 00:05:39.774128 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 8 00:05:39.774133 kernel: NX (Execute Disable) protection: active May 8 00:05:39.774137 kernel: APIC: Static calls initialized May 8 00:05:39.774142 kernel: SMBIOS 2.7 present. May 8 00:05:39.774147 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 8 00:05:39.774152 kernel: vmware: hypercall mode: 0x00 May 8 00:05:39.774157 kernel: Hypervisor detected: VMware May 8 00:05:39.774162 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 8 00:05:39.774168 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 8 00:05:39.774172 kernel: vmware: using clock offset of 3009430520 ns May 8 00:05:39.774177 kernel: tsc: Detected 3408.000 MHz processor May 8 00:05:39.774183 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:05:39.774188 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:05:39.774193 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 8 00:05:39.774198 kernel: total RAM covered: 3072M May 8 00:05:39.774203 kernel: Found optimal setting for mtrr clean up May 8 00:05:39.774209 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 8 00:05:39.774214 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 8 00:05:39.774220 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:05:39.774225 kernel: Using GB pages for direct mapping May 8 00:05:39.774230 kernel: ACPI: Early table checksum verification disabled May 8 00:05:39.774234 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 8 00:05:39.774239 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 8 00:05:39.774244 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 8 00:05:39.774249 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 8 00:05:39.774254 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:05:39.774262 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:05:39.774267 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 8 00:05:39.774272 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 8 00:05:39.774278 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 8 00:05:39.774283 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 8 00:05:39.774288 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 8 00:05:39.774295 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 8 00:05:39.774300 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 8 00:05:39.774305 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 8 00:05:39.774310 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:05:39.774316 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:05:39.774321 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 8 00:05:39.774326 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 8 00:05:39.774331 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 8 00:05:39.774336 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 8 00:05:39.774342 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 8 00:05:39.774348 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 8 00:05:39.774353 kernel: system APIC only can use physical flat May 8 00:05:39.774358 kernel: APIC: Switched APIC routing to: physical flat May 8 00:05:39.774363 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 8 00:05:39.774368 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 8 00:05:39.774373 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 8 00:05:39.774378 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 8 00:05:39.774383 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 8 00:05:39.774389 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 8 00:05:39.774394 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 8 00:05:39.774400 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 8 00:05:39.774405 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 8 00:05:39.774409 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 8 00:05:39.774414 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 8 00:05:39.774419 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 8 00:05:39.774424 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 8 00:05:39.774429 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 8 00:05:39.774435 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 8 00:05:39.774439 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 8 00:05:39.774446 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 8 00:05:39.774451 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 8 00:05:39.774455 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 8 00:05:39.774460 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 8 00:05:39.774466 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 8 00:05:39.774471 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 8 00:05:39.774476 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 8 00:05:39.774480 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 8 00:05:39.774485 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 8 00:05:39.774490 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 8 00:05:39.774497 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 8 00:05:39.774502 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 8 00:05:39.774507 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 8 00:05:39.774512 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 May 8 00:05:39.774517 kernel: SRAT: PXM 
0 -> APIC 0x3c -> Node 0 May 8 00:05:39.774522 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 8 00:05:39.774527 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 8 00:05:39.774532 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 8 00:05:39.774538 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 8 00:05:39.774543 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 8 00:05:39.774549 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 8 00:05:39.774554 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 8 00:05:39.774559 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 8 00:05:39.774565 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 8 00:05:39.774569 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 8 00:05:39.774575 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 8 00:05:39.774580 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 8 00:05:39.774585 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 8 00:05:39.774590 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 8 00:05:39.774595 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 8 00:05:39.774602 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 8 00:05:39.774610 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 8 00:05:39.774618 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 8 00:05:39.774626 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 8 00:05:39.774635 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 8 00:05:39.774641 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 8 00:05:39.774646 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 8 00:05:39.774651 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 8 00:05:39.774656 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 8 00:05:39.774661 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 8 00:05:39.774666 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 8 00:05:39.774673 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 8 00:05:39.774679 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 8 00:05:39.774688 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 8 00:05:39.774694 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 8 00:05:39.774702 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 8 00:05:39.774711 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 8 00:05:39.774720 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 8 00:05:39.774726 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 8 00:05:39.774733 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 8 00:05:39.774739 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 8 00:05:39.774744 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 8 00:05:39.774749 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 8 00:05:39.774754 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 8 00:05:39.774760 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 8 00:05:39.774765 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 8 00:05:39.774771 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 8 00:05:39.774776 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 8 00:05:39.774781 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 8 00:05:39.774787 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 8 00:05:39.774793 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 8 00:05:39.774799 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 8 00:05:39.774807 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 8 00:05:39.774816 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 8 00:05:39.774824 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 8 00:05:39.774830 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 8 00:05:39.774835 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 8 00:05:39.774840 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 May 8 00:05:39.774846 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 8 
00:05:39.774851 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 8 00:05:39.774858 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 8 00:05:39.774864 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 8 00:05:39.774869 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 8 00:05:39.774874 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 8 00:05:39.774880 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 8 00:05:39.774885 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 8 00:05:39.774890 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 8 00:05:39.774895 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 8 00:05:39.774901 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 8 00:05:39.774906 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 8 00:05:39.774913 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 8 00:05:39.774918 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 8 00:05:39.774923 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 8 00:05:39.774929 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 8 00:05:39.774934 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 8 00:05:39.774940 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 8 00:05:39.774945 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 8 00:05:39.774954 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 8 00:05:39.775210 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 8 00:05:39.775219 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 8 00:05:39.775228 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 8 00:05:39.775233 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 8 00:05:39.775239 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 8 00:05:39.775244 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 8 00:05:39.775249 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 8 00:05:39.775255 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 8 00:05:39.775260 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 8 00:05:39.775266 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 8 00:05:39.775271 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 8 00:05:39.775277 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 8 00:05:39.775282 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 8 00:05:39.775289 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 8 00:05:39.775294 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 8 00:05:39.775300 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 8 00:05:39.775305 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 8 00:05:39.775310 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 8 00:05:39.775315 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 8 00:05:39.775321 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 8 00:05:39.775326 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 8 00:05:39.775332 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 8 00:05:39.775337 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 8 00:05:39.775344 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 8 00:05:39.775350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 8 00:05:39.775355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 8 00:05:39.775361 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 8 00:05:39.775366 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 8 00:05:39.775372 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 8 00:05:39.775378 kernel: Zone ranges: May 8 00:05:39.775383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:05:39.775389 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 8 00:05:39.775396 kernel: Normal empty May 8 00:05:39.775401 kernel: Movable zone start 
for each node May 8 00:05:39.775407 kernel: Early memory node ranges May 8 00:05:39.775412 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 8 00:05:39.775418 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 8 00:05:39.775423 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 8 00:05:39.775429 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 8 00:05:39.775435 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:05:39.775440 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 8 00:05:39.775446 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 8 00:05:39.775453 kernel: ACPI: PM-Timer IO Port: 0x1008 May 8 00:05:39.775458 kernel: system APIC only can use physical flat May 8 00:05:39.775464 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 8 00:05:39.775469 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 8 00:05:39.775475 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 8 00:05:39.775480 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 8 00:05:39.775485 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 8 00:05:39.775491 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 8 00:05:39.775497 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 8 00:05:39.775502 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 8 00:05:39.775509 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 8 00:05:39.775514 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 8 00:05:39.775520 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 8 00:05:39.775525 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 8 00:05:39.775531 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 8 00:05:39.775536 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 8 00:05:39.775541 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 8 00:05:39.775547 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 8 00:05:39.775552 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 8 00:05:39.775559 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 8 00:05:39.775564 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 8 00:05:39.775570 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 8 00:05:39.775575 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 8 00:05:39.775581 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 8 00:05:39.775586 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 8 00:05:39.775592 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 8 00:05:39.775597 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 8 00:05:39.775603 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 8 00:05:39.775608 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 8 00:05:39.775615 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 8 00:05:39.775620 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 8 00:05:39.775625 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 8 00:05:39.775631 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 8 00:05:39.775636 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 8 00:05:39.775642 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 8 00:05:39.775647 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] 
high edge lint[0x1]) May 8 00:05:39.775653 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 8 00:05:39.775658 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 8 00:05:39.775664 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 8 00:05:39.775670 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 8 00:05:39.775675 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 8 00:05:39.775681 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 8 00:05:39.775686 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 8 00:05:39.775692 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 8 00:05:39.775697 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 8 00:05:39.775703 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 8 00:05:39.775708 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 8 00:05:39.775714 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 8 00:05:39.775720 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 8 00:05:39.775726 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 8 00:05:39.775731 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 8 00:05:39.775736 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 8 00:05:39.775742 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 8 00:05:39.775748 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 8 00:05:39.775753 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 8 00:05:39.775759 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 8 00:05:39.775764 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 8 00:05:39.775770 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 8 00:05:39.775776 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 8 00:05:39.775782 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 8 00:05:39.775787 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 8 00:05:39.775792 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 8 00:05:39.775798 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 8 00:05:39.775803 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 8 00:05:39.775809 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 8 00:05:39.775814 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 8 00:05:39.775820 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 8 00:05:39.775825 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 8 00:05:39.775832 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 8 00:05:39.775837 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 8 00:05:39.775843 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 8 00:05:39.775848 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 8 00:05:39.775854 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 8 00:05:39.775859 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 8 00:05:39.775865 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 8 00:05:39.775870 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 8 00:05:39.775876 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 8 00:05:39.775881 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 8 00:05:39.775888 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 8 
00:05:39.775893 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 8 00:05:39.775898 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 8 00:05:39.775904 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 8 00:05:39.775909 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 8 00:05:39.775915 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 8 00:05:39.775920 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 8 00:05:39.775926 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 8 00:05:39.775931 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 8 00:05:39.775938 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 8 00:05:39.775943 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 8 00:05:39.775948 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 8 00:05:39.775954 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 8 00:05:39.775959 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 8 00:05:39.777771 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 8 00:05:39.777779 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 8 00:05:39.777785 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 8 00:05:39.777791 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 8 00:05:39.777797 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 8 00:05:39.777805 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 8 00:05:39.777811 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 8 00:05:39.777816 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 8 00:05:39.777821 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 8 00:05:39.777827 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 8 00:05:39.777833 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 8 00:05:39.777838 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 8 00:05:39.777844 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 8 00:05:39.777849 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 8 00:05:39.777855 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 8 00:05:39.777862 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 8 00:05:39.777867 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 8 00:05:39.777872 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 8 00:05:39.777878 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 8 00:05:39.777883 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 8 00:05:39.777889 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 8 00:05:39.777894 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 8 00:05:39.777900 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 8 00:05:39.777905 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 8 00:05:39.777911 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 8 00:05:39.777918 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 8 00:05:39.777923 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 8 00:05:39.777931 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 8 00:05:39.777940 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 8 00:05:39.777949 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 8 00:05:39.777956 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 8 00:05:39.777962 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 8 00:05:39.777976 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 8 00:05:39.777982 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 8 00:05:39.777990 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 8 00:05:39.777995 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 8 00:05:39.778001 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 8 00:05:39.778006 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 8 00:05:39.778012 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 8 00:05:39.778017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 8 00:05:39.778023 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:05:39.778029 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 8 00:05:39.778034 kernel: TSC deadline timer available May 8 00:05:39.778040 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 8 00:05:39.778047 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 8 00:05:39.778052 kernel: Booting paravirtualized kernel on VMware hypervisor May 8 00:05:39.778058 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:05:39.778064 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 8 00:05:39.778070 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 May 8 00:05:39.778076 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 May 8 00:05:39.778081 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 8 00:05:39.778087 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 8 00:05:39.778092 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 8 00:05:39.778099 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 8 00:05:39.778105 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 8 00:05:39.778118 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 8 00:05:39.778125 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 8 00:05:39.778130 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 8 00:05:39.778136 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 8 00:05:39.778142 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 8 00:05:39.778148 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 8 00:05:39.778155 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 8 00:05:39.778161 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 8 00:05:39.778167 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 8 00:05:39.778173 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 8 00:05:39.778178 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 8 00:05:39.778185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:05:39.778192 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 8 00:05:39.778198 kernel: random: crng init done May 8 00:05:39.778205 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 8 00:05:39.778211 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 8 00:05:39.778217 kernel: printk: log_buf_len min size: 262144 bytes May 8 00:05:39.778222 kernel: printk: log_buf_len: 1048576 bytes May 8 00:05:39.778228 kernel: printk: early log buf free: 239648(91%) May 8 00:05:39.778234 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:05:39.778240 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 8 00:05:39.778248 kernel: Fallback order for Node 0: 0 May 8 00:05:39.778253 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 8 00:05:39.778261 kernel: Policy zone: DMA32 May 8 00:05:39.778267 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:05:39.778273 kernel: Memory: 1934328K/2096628K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 162040K reserved, 0K cma-reserved) May 8 00:05:39.778280 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 8 00:05:39.778286 kernel: ftrace: allocating 37918 entries in 149 pages May 8 00:05:39.778292 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:05:39.778299 kernel: Dynamic Preempt: voluntary May 8 00:05:39.778305 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:05:39.778311 kernel: rcu: RCU event tracing is enabled. May 8 00:05:39.778317 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 8 00:05:39.778323 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:05:39.778329 kernel: Rude variant of Tasks RCU enabled. May 8 00:05:39.778335 kernel: Tracing variant of Tasks RCU enabled. May 8 00:05:39.778341 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:05:39.778346 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 8 00:05:39.778354 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 8 00:05:39.778360 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 8 00:05:39.778366 kernel: Console: colour VGA+ 80x25 May 8 00:05:39.778371 kernel: printk: console [tty0] enabled May 8 00:05:39.778377 kernel: printk: console [ttyS0] enabled May 8 00:05:39.778383 kernel: ACPI: Core revision 20230628 May 8 00:05:39.778389 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 8 00:05:39.778395 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:05:39.778401 kernel: x2apic enabled May 8 00:05:39.778407 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:05:39.778414 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:05:39.778420 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:05:39.778426 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 8 00:05:39.778432 kernel: Disabled fast string operations May 8 00:05:39.778438 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 8 00:05:39.778444 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 8 00:05:39.778450 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:05:39.778456 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 8 00:05:39.778462 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 8 00:05:39.778469 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 8 00:05:39.778475 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:05:39.778481 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 8 00:05:39.778487 kernel: RETBleed: Mitigation: Enhanced IBRS May 8 00:05:39.778493 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:05:39.778499 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:05:39.778505 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:05:39.778511 kernel: SRBDS: Unknown: Dependent on hypervisor status May 8 00:05:39.778518 kernel: GDS: Unknown: Dependent on hypervisor status May 8 00:05:39.778524 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:05:39.778530 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:05:39.778536 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:05:39.778542 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:05:39.778548 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:05:39.778553 kernel: Freeing SMP alternatives memory: 32K May 8 00:05:39.778560 kernel: pid_max: default: 131072 minimum: 1024 May 8 00:05:39.778570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:05:39.778582 kernel: landlock: Up and running. May 8 00:05:39.778591 kernel: SELinux: Initializing. May 8 00:05:39.778600 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:05:39.778608 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:05:39.778617 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 8 00:05:39.778626 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:05:39.778635 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:05:39.778645 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:05:39.778655 kernel: Performance Events: Skylake events, core PMU driver. 
May 8 00:05:39.778666 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 8 00:05:39.778676 kernel: core: CPUID marked event: 'instructions' unavailable May 8 00:05:39.778685 kernel: core: CPUID marked event: 'bus cycles' unavailable May 8 00:05:39.778693 kernel: core: CPUID marked event: 'cache references' unavailable May 8 00:05:39.778702 kernel: core: CPUID marked event: 'cache misses' unavailable May 8 00:05:39.778713 kernel: core: CPUID marked event: 'branch instructions' unavailable May 8 00:05:39.778722 kernel: core: CPUID marked event: 'branch misses' unavailable May 8 00:05:39.778730 kernel: ... version: 1 May 8 00:05:39.778739 kernel: ... bit width: 48 May 8 00:05:39.778750 kernel: ... generic registers: 4 May 8 00:05:39.778759 kernel: ... value mask: 0000ffffffffffff May 8 00:05:39.778768 kernel: ... max period: 000000007fffffff May 8 00:05:39.778779 kernel: ... fixed-purpose events: 0 May 8 00:05:39.778787 kernel: ... event mask: 000000000000000f May 8 00:05:39.778793 kernel: signal: max sigframe size: 1776 May 8 00:05:39.778799 kernel: rcu: Hierarchical SRCU implementation. May 8 00:05:39.778808 kernel: rcu: Max phase no-delay instances is 400. May 8 00:05:39.778818 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 8 00:05:39.778828 kernel: smp: Bringing up secondary CPUs ... May 8 00:05:39.778834 kernel: smpboot: x86: Booting SMP configuration: May 8 00:05:39.778840 kernel: .... node #0, CPUs: #1 May 8 00:05:39.778846 kernel: Disabled fast string operations May 8 00:05:39.778852 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 8 00:05:39.778858 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 8 00:05:39.778863 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:05:39.778869 kernel: smpboot: Max logical packages: 128 May 8 00:05:39.778875 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 8 00:05:39.778881 kernel: devtmpfs: initialized May 8 00:05:39.778889 kernel: x86/mm: Memory block size: 128MB May 8 00:05:39.778895 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 8 00:05:39.778901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:05:39.778907 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:05:39.778913 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:05:39.778918 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:05:39.778924 kernel: audit: initializing netlink subsys (disabled) May 8 00:05:39.778930 kernel: audit: type=2000 audit(1746662738.067:1): state=initialized audit_enabled=0 res=1 May 8 00:05:39.778937 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:05:39.778943 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:05:39.778949 kernel: cpuidle: using governor menu May 8 00:05:39.778955 kernel: Simple Boot Flag at 0x36 set to 0x80 May 8 00:05:39.778961 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:05:39.779118 kernel: dca service started, version 1.12.1 May 8 00:05:39.779124 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 8 00:05:39.779130 kernel: PCI: Using configuration type 1 for base access May 8 00:05:39.779136 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:05:39.779144 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:05:39.779150 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:05:39.779156 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:05:39.779162 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:05:39.779168 kernel: ACPI: Added _OSI(Module Device) May 8 00:05:39.779174 kernel: ACPI: Added _OSI(Processor Device) May 8 00:05:39.779180 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:05:39.779186 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:05:39.779191 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:05:39.779198 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 8 00:05:39.779210 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:05:39.779218 kernel: ACPI: Interpreter enabled May 8 00:05:39.779228 kernel: ACPI: PM: (supports S0 S1 S5) May 8 00:05:39.779237 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:05:39.779245 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:05:39.779251 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:05:39.779259 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 8 00:05:39.779268 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 8 00:05:39.779385 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:05:39.779470 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 8 00:05:39.779555 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 8 00:05:39.779569 kernel: PCI host bridge to bus 0000:00 May 8 00:05:39.779650 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:05:39.779724 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 8 00:05:39.779799 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:05:39.779877 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:05:39.779934 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 8 00:05:39.780008 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 8 00:05:39.780082 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 8 00:05:39.780145 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 8 00:05:39.780205 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 8 00:05:39.780265 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 8 00:05:39.780317 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 8 00:05:39.780368 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 8 00:05:39.780434 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 8 00:05:39.780488 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 8 00:05:39.780539 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 8 00:05:39.780608 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 8 00:05:39.780687 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 8 00:05:39.780758 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 8 00:05:39.780835 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 8 00:05:39.780889 kernel: pci 0000:00:07.7: reg 0x10: [io 
0x1080-0x10bf] May 8 00:05:39.780941 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 8 00:05:39.786047 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 8 00:05:39.786122 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 8 00:05:39.786176 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 8 00:05:39.786228 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 8 00:05:39.786280 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 8 00:05:39.786331 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:05:39.786386 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 8 00:05:39.786448 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.786504 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 8 00:05:39.786561 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.786613 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 8 00:05:39.786671 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.786723 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 8 00:05:39.786781 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.786836 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 8 00:05:39.786893 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.786947 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 8 00:05:39.787018 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787073 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 8 00:05:39.787129 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787185 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 8 00:05:39.787241 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787293 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 8 00:05:39.787349 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787401 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 8 00:05:39.787462 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787514 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 8 00:05:39.787569 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787621 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 8 00:05:39.787679 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787731 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 8 00:05:39.787785 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787840 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 8 00:05:39.787896 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.787948 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 8 00:05:39.790387 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.790464 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 8 00:05:39.790526 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.790584 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 8 00:05:39.790640 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.790692 kernel: pci 0000:00:17.0: PME# supported from D0 
D3hot D3cold May 8 00:05:39.790748 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.790801 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 8 00:05:39.790857 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.790912 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 8 00:05:39.790983 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791040 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 8 00:05:39.791095 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791146 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 8 00:05:39.791206 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791262 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 8 00:05:39.791319 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791371 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 8 00:05:39.791491 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791569 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 8 00:05:39.791642 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791700 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 8 00:05:39.791756 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791807 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 8 00:05:39.791862 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.791913 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 8 00:05:39.791981 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.792059 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 8 00:05:39.792120 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.792173 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 8 00:05:39.792228 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.792279 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 8 00:05:39.792336 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.792388 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 8 00:05:39.792484 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:39.792543 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 8 00:05:39.792599 kernel: pci_bus 0000:01: extended config space not accessible May 8 00:05:39.792654 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:05:39.792707 kernel: pci_bus 0000:02: extended config space not accessible May 8 00:05:39.792717 kernel: acpiphp: Slot [32] registered May 8 00:05:39.792723 kernel: acpiphp: Slot [33] registered May 8 00:05:39.792732 kernel: acpiphp: Slot [34] registered May 8 00:05:39.792738 kernel: acpiphp: Slot [35] registered May 8 00:05:39.792744 kernel: acpiphp: Slot [36] registered May 8 00:05:39.792750 kernel: acpiphp: Slot [37] registered May 8 00:05:39.792756 kernel: acpiphp: Slot [38] registered May 8 00:05:39.792762 kernel: acpiphp: Slot [39] registered May 8 00:05:39.792768 kernel: acpiphp: Slot [40] registered May 8 00:05:39.792774 kernel: acpiphp: Slot [41] registered May 8 00:05:39.792780 kernel: acpiphp: Slot [42] registered May 8 00:05:39.792787 kernel: acpiphp: Slot [43] registered May 8 00:05:39.792793 kernel: acpiphp: Slot [44] registered May 8 
00:05:39.792799 kernel: acpiphp: Slot [45] registered May 8 00:05:39.792805 kernel: acpiphp: Slot [46] registered May 8 00:05:39.792811 kernel: acpiphp: Slot [47] registered May 8 00:05:39.792817 kernel: acpiphp: Slot [48] registered May 8 00:05:39.792822 kernel: acpiphp: Slot [49] registered May 8 00:05:39.792828 kernel: acpiphp: Slot [50] registered May 8 00:05:39.792834 kernel: acpiphp: Slot [51] registered May 8 00:05:39.792840 kernel: acpiphp: Slot [52] registered May 8 00:05:39.792847 kernel: acpiphp: Slot [53] registered May 8 00:05:39.792853 kernel: acpiphp: Slot [54] registered May 8 00:05:39.792859 kernel: acpiphp: Slot [55] registered May 8 00:05:39.792865 kernel: acpiphp: Slot [56] registered May 8 00:05:39.792871 kernel: acpiphp: Slot [57] registered May 8 00:05:39.792877 kernel: acpiphp: Slot [58] registered May 8 00:05:39.792883 kernel: acpiphp: Slot [59] registered May 8 00:05:39.792889 kernel: acpiphp: Slot [60] registered May 8 00:05:39.792895 kernel: acpiphp: Slot [61] registered May 8 00:05:39.792902 kernel: acpiphp: Slot [62] registered May 8 00:05:39.792908 kernel: acpiphp: Slot [63] registered May 8 00:05:39.792975 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 8 00:05:39.793033 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:05:39.793084 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:05:39.793136 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:05:39.793208 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 8 00:05:39.793279 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 8 00:05:39.793357 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 8 00:05:39.793439 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 8 00:05:39.793520 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 8 00:05:39.793608 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 8 00:05:39.793686 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 8 00:05:39.793767 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 8 00:05:39.793828 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:05:39.793892 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 8 00:05:39.793946 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:05:39.794021 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:05:39.794076 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:05:39.794127 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:05:39.794181 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:05:39.794233 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:05:39.794284 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:05:39.794340 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:05:39.794395 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:05:39.794447 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:05:39.794500 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:05:39.794553 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:05:39.794607 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:05:39.794673 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:05:39.794740 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:05:39.794795 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:05:39.794846 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:05:39.794897 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:05:39.794954 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:05:39.795024 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:05:39.795075 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:05:39.795130 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:05:39.795181 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:05:39.795231 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:05:39.795285 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:05:39.795338 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:05:39.795390 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:05:39.795452 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 8 00:05:39.795506 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 8 00:05:39.795559 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 8 00:05:39.795610 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 8 00:05:39.795662 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 8 00:05:39.795714 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:05:39.795766 kernel: pci 0000:0b:00.0: supports D1 D2 May 8 00:05:39.795820 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 00:05:39.795871 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:05:39.795924 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:05:39.796068 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:05:39.796122 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:05:39.796175 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:05:39.796227 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:05:39.796277 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:05:39.796331 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:05:39.796384 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:05:39.796447 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:05:39.796500 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:05:39.796551 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:05:39.796605 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:05:39.796656 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:05:39.796710 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:05:39.796765 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:05:39.796817 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:05:39.796868 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:05:39.796922 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:05:39.796984 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:05:39.797043 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:05:39.797110 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:05:39.797167 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:05:39.797218 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:05:39.797272 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:05:39.797324 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:05:39.797374 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:05:39.797439 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:05:39.797493 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:05:39.797544 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:05:39.797599 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:05:39.797654 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:05:39.797706 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:05:39.797757 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:05:39.797811 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:05:39.797871 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:05:39.797937 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:05:39.798067 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:05:39.798136 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:05:39.798192 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:05:39.798242 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:05:39.798292 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] May 8 00:05:39.798344 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:05:39.798396 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:05:39.798446 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:05:39.798499 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:05:39.798553 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:05:39.798605 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:05:39.798658 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:05:39.798711 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:05:39.798762 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:05:39.798814 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:05:39.798865 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:05:39.798915 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:05:39.798989 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:05:39.799060 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:05:39.799113 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:05:39.799164 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:05:39.799217 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:05:39.799268 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:05:39.799319 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:05:39.799373 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:05:39.799433 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:05:39.799484 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:05:39.799535 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:05:39.799588 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:05:39.799639 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:05:39.799689 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:05:39.799741 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:05:39.799795 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:05:39.799845 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:05:39.799899 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:05:39.799950 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:05:39.804081 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:05:39.804163 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:05:39.804218 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:05:39.804272 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:05:39.804332 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:05:39.804383 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:05:39.804435 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:05:39.804444 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 8 00:05:39.804451 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 8 00:05:39.804457 kernel: ACPI: PCI: Interrupt 
link LNKB disabled May 8 00:05:39.804463 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:05:39.804469 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 8 00:05:39.804475 kernel: iommu: Default domain type: Translated May 8 00:05:39.804484 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:05:39.804490 kernel: PCI: Using ACPI for IRQ routing May 8 00:05:39.804496 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:05:39.804502 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 8 00:05:39.804508 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 8 00:05:39.804560 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 8 00:05:39.804612 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 8 00:05:39.804679 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:05:39.804692 kernel: vgaarb: loaded May 8 00:05:39.804707 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 8 00:05:39.804716 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 8 00:05:39.804725 kernel: clocksource: Switched to clocksource tsc-early May 8 00:05:39.804735 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:05:39.804746 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:05:39.804757 kernel: pnp: PnP ACPI init May 8 00:05:39.804829 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 8 00:05:39.804889 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 8 00:05:39.804941 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 8 00:05:39.805019 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 8 00:05:39.805073 kernel: pnp 00:06: [dma 2] May 8 00:05:39.805131 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 8 00:05:39.805180 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 8 00:05:39.805229 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 8 00:05:39.805241 kernel: pnp: PnP ACPI: found 8 devices May 8 00:05:39.805247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:05:39.805253 kernel: NET: Registered PF_INET protocol family May 8 00:05:39.805259 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:05:39.805265 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:05:39.805271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:05:39.805277 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:05:39.805283 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:05:39.805289 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:05:39.805297 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:05:39.805303 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:05:39.805309 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:05:39.805315 kernel: NET: Registered PF_XDP protocol family May 8 00:05:39.805372 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:05:39.805433 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 8 00:05:39.805489 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 8 00:05:39.805547 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 8 00:05:39.805602 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 8 00:05:39.805659 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 8 00:05:39.805713 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 8 00:05:39.805767 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 8 00:05:39.805823 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 8 00:05:39.805882 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 8 00:05:39.805935 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 8 00:05:39.806012 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 8 00:05:39.806066 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 8 00:05:39.806120 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 8 00:05:39.806173 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 8 00:05:39.806229 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 8 00:05:39.806282 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 8 00:05:39.806337 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 8 00:05:39.806391 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 8 00:05:39.806456 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 8 00:05:39.806512 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 8 00:05:39.806569 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 8 00:05:39.806622 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 8 00:05:39.806675 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:05:39.806727 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:05:39.806783 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.806854 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.806930 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807154 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807233 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807296 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807351 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807403 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807454 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807505 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807558 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807614 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807668 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807719 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807771 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807822 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.807875 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.807930 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.808013 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.809185 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.809255 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.809316 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.809372 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.809434 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.809499 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.809571 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.809644 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.810120 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.810200 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.810288 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.810367 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:05:39.810451 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.810516 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.810581 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.810652 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.810727 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.810801 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.810898 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.810973 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811039 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.811111 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811183 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.811258 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811331 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.811400 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811466 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.811538 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811604 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.811670 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811737 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.811818 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.811899 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.812513 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] May 8 00:05:39.812596 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.812669 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.812744 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.812832 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.812920 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813024 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813101 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813174 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813243 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813310 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813365 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813424 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813478 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813531 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813594 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813652 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813704 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813762 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813827 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.813894 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.813948 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.815116 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:05:39.815192 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.815259 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:05:39.815329 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.815400 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:05:39.815467 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.815531 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:05:39.815594 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.815664 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:05:39.815722 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:39.815790 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:05:39.815849 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:05:39.815909 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:05:39.817005 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:05:39.817100 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:05:39.817200 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:05:39.817307 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:05:39.817387 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:05:39.817449 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:05:39.817512 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:05:39.817592 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:05:39.817661 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:05:39.817742 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:05:39.817819 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:05:39.817886 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:05:39.817944 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:05:39.818026 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:05:39.818099 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:05:39.818180 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:05:39.818249 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:05:39.818323 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:05:39.818401 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:05:39.818463 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:05:39.818523 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:05:39.818649 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:05:39.820465 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:05:39.820562 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:05:39.820641 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:05:39.820711 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:05:39.820777 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:05:39.820848 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:05:39.820934 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:05:39.821753 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:05:39.821852 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:05:39.821922 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:05:39.822001 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:05:39.822072 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:05:39.822135 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:05:39.822221 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:05:39.822298 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:05:39.822360 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:05:39.822419 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:05:39.822485 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:05:39.822550 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:05:39.822630 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:05:39.822694 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:05:39.822768 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:05:39.822843 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:05:39.822909 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:05:39.823352 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:05:39.823437 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:05:39.823512 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:05:39.823597 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:05:39.823656 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:05:39.823734 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:05:39.823807 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:05:39.823892 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:05:39.823957 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:05:39.824493 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:05:39.824547 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:05:39.824600 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:05:39.824655 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:05:39.824711 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:05:39.824773 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:05:39.824845 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:05:39.824918 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:05:39.825032 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:05:39.825099 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:05:39.825164 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:05:39.825242 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:05:39.825322 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:05:39.825396 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:05:39.825455 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:05:39.825520 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:05:39.825580 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:05:39.825655 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:05:39.825725 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:05:39.825807 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:05:39.825888 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:05:39.825949 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:05:39.826061 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:05:39.826133 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:05:39.826199 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:05:39.826280 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:05:39.826344 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:05:39.826414 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:05:39.826482 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:05:39.826546 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:05:39.826601 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:05:39.826653 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:05:39.826712 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:05:39.826791 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:05:39.826853 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:05:39.826935 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:05:39.827015 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:05:39.827087 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:05:39.827170 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:05:39.827250 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:05:39.827310 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:05:39.827375 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:05:39.827451 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:05:39.827515 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:05:39.827598 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:05:39.827672 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:05:39.827737 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:05:39.827796 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:05:39.827849 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:05:39.827901 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:05:39.827956 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:05:39.828050 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:05:39.828111 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:05:39.828192 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:05:39.828266 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:05:39.828329 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:05:39.828392 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:05:39.828470 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:05:39.828542 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:05:39.828607 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:05:39.828675 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:05:39.828734 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:05:39.828784 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:05:39.828832 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:05:39.828888 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:05:39.828940 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:05:39.829048 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:05:39.829122 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:05:39.829188 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:05:39.829268 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:05:39.829324 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:05:39.829388 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:05:39.829458 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:05:39.829530 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] May 8 00:05:39.829600 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:05:39.829669 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:05:39.829727 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:05:39.829784 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:05:39.829865 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:05:39.829933 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:05:39.830114 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:05:39.830172 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:05:39.830242 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:05:39.830303 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:05:39.830376 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:05:39.830445 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:05:39.830513 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:05:39.830579 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:05:39.830658 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:05:39.830726 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:05:39.830799 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:05:39.830858 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:05:39.830916 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:05:39.830984 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:05:39.831069 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:05:39.831130 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 00:05:39.831202 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:05:39.831288 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:05:39.831342 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:05:39.831403 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:05:39.831476 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:05:39.831559 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:05:39.831633 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:05:39.831700 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:05:39.831758 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:05:39.831838 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:05:39.831907 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:05:39.831990 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:05:39.834128 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:05:39.834201 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:05:39.834269 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:05:39.834346 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:05:39.834415 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:05:39.834483 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] May 8 00:05:39.834542 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:05:39.834600 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:05:39.834674 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:05:39.834746 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:05:39.834816 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:05:39.834875 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:05:39.834937 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:05:39.835026 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:05:39.835741 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:05:39.835812 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:05:39.835870 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:05:39.835922 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:05:39.836052 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:05:39.836103 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:05:39.836153 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:05:39.836206 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:05:39.836255 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:05:39.836303 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:05:39.836356 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:05:39.836417 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:05:39.836473 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:05:39.836528 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:05:39.836586 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:05:39.836635 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:05:39.836687 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:05:39.836736 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:05:39.836793 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:05:39.836845 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:05:39.836898 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:05:39.836947 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:05:39.837433 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:05:39.837448 kernel: PCI: CLS 32 bytes, default 64 May 8 00:05:39.837456 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:05:39.837463 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:05:39.837472 kernel: clocksource: Switched to clocksource tsc May 8 00:05:39.837480 kernel: Initialise system trusted keyrings May 8 00:05:39.837487 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:05:39.837493 kernel: Key type asymmetric registered May 8 00:05:39.837499 kernel: Asymmetric key parser 'x509' registered May 8 00:05:39.837505 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) May 8 00:05:39.837512 kernel: io scheduler mq-deadline registered May 8 00:05:39.837520 kernel: io scheduler kyber registered May 8 00:05:39.837526 kernel: io scheduler bfq registered May 8 00:05:39.837588 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:05:39.837646 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.837713 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:05:39.837774 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.837833 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:05:39.837887 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.837942 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:05:39.838070 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838127 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:05:39.838202 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838257 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:05:39.838311 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838370 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:05:39.838423 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838478 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:05:39.838531 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838586 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:05:39.838639 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838697 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:05:39.838749 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838805 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:05:39.838858 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.838913 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 8 00:05:39.839373 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.839452 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 8 00:05:39.839514 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.839572 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 May 8 00:05:39.839627 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.839682 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 8 00:05:39.839736 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.839794 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 8 00:05:39.839850 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.839904 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 8 00:05:39.839958 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840120 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 8 00:05:39.840173 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840231 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 8 00:05:39.840284 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840337 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 8 00:05:39.840391 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840459 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 8 00:05:39.840513 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840570 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 8 00:05:39.840624 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840681 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 8 00:05:39.840736 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840790 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 8 00:05:39.840843 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.840900 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 8 00:05:39.840953 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841023 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 8 00:05:39.841077 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841130 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 8 00:05:39.841184 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841243 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 8 00:05:39.841297 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841353 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 8 00:05:39.841407 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841461 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 8 00:05:39.841520 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841576 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 8 00:05:39.841638 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841706 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 8 00:05:39.841761 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:39.841773 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:05:39.841780 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:05:39.841787 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:05:39.841794 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 8 00:05:39.841800 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:05:39.841807 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:05:39.841865 kernel: rtc_cmos 00:01: registered as rtc0 May 8 00:05:39.841916 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:05:39 UTC (1746662739) May 8 00:05:39.841925 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:05:39.843100 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 8 00:05:39.843119 kernel: intel_pstate: CPU model not supported May 8 00:05:39.843131 kernel: NET: Registered PF_INET6 protocol family May 8 00:05:39.843142 kernel: Segment Routing with IPv6 May 8 00:05:39.843149 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:05:39.843156 kernel: NET: Registered PF_PACKET protocol family May 8 00:05:39.843162 kernel: Key type dns_resolver registered May 8 00:05:39.843169 kernel: IPI shorthand broadcast: enabled May 8 00:05:39.843175 kernel: sched_clock: Marking stable (900003563, 225942325)->(1187719704, -61773816) May 8 00:05:39.843186 kernel: registered taskstats version 1 May 8 00:05:39.843195 kernel: Loading compiled-in X.509 certificates May 8 00:05:39.843203 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:05:39.843210 kernel: Key type .fscrypt registered May 8 00:05:39.843216 kernel: Key type fscrypt-provisioning registered May 8 00:05:39.843222 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:05:39.843232 kernel: ima: Allocated hash algorithm: sha1 May 8 00:05:39.843239 kernel: ima: No architecture policies found May 8 00:05:39.843247 kernel: clk: Disabling unused clocks May 8 00:05:39.843254 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:05:39.843260 kernel: Write protecting the kernel read-only data: 38912k May 8 00:05:39.843267 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:05:39.843276 kernel: Run /init as init process May 8 00:05:39.843282 kernel: with arguments: May 8 00:05:39.843291 kernel: /init May 8 00:05:39.843298 kernel: with environment: May 8 00:05:39.843304 kernel: HOME=/ May 8 00:05:39.843310 kernel: TERM=linux May 8 00:05:39.843318 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:05:39.843326 systemd[1]: Successfully made /usr/ read-only. May 8 00:05:39.843334 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:05:39.843343 systemd[1]: Detected virtualization vmware. May 8 00:05:39.843351 systemd[1]: Detected architecture x86-64. May 8 00:05:39.843358 systemd[1]: Running in initrd. May 8 00:05:39.843366 systemd[1]: No hostname configured, using default hostname. May 8 00:05:39.843375 systemd[1]: Hostname set to . May 8 00:05:39.843381 systemd[1]: Initializing machine ID from random generator. May 8 00:05:39.843387 systemd[1]: Queued start job for default target initrd.target. May 8 00:05:39.843394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:05:39.843401 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:05:39.843416 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:05:39.843423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:05:39.843432 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:05:39.843444 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:05:39.843451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:05:39.843458 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:05:39.843464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:05:39.843471 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:05:39.843479 systemd[1]: Reached target paths.target - Path Units. May 8 00:05:39.843486 systemd[1]: Reached target slices.target - Slice Units. May 8 00:05:39.843494 systemd[1]: Reached target swap.target - Swaps. May 8 00:05:39.843504 systemd[1]: Reached target timers.target - Timer Units. May 8 00:05:39.843516 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:05:39.843527 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:05:39.843539 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
May 8 00:05:39.843551 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:05:39.843559 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:05:39.843569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:05:39.843578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:05:39.843584 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:05:39.843591 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:05:39.843598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:05:39.843604 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:05:39.843613 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:05:39.843621 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:05:39.843627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:05:39.843637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:39.843645 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:05:39.843652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:05:39.843676 systemd-journald[216]: Collecting audit messages is disabled. May 8 00:05:39.843699 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:05:39.843707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:05:39.843716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:39.843723 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:05:39.843730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:05:39.843739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:05:39.843746 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:05:39.843753 kernel: Bridge firewalling registered May 8 00:05:39.843763 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:39.843773 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:05:39.843780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:05:39.844182 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:05:39.844194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:05:39.844203 systemd-journald[216]: Journal started May 8 00:05:39.844225 systemd-journald[216]: Runtime Journal (/run/log/journal/42270c1022f042918154c07792ba43f6) is 4.8M, max 38.6M, 33.8M free. May 8 00:05:39.791612 systemd-modules-load[217]: Inserted module 'overlay' May 8 00:05:39.845510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:39.823012 systemd-modules-load[217]: Inserted module 'br_netfilter' May 8 00:05:39.847115 systemd[1]: Started systemd-journald.service - Journal Service. 
May 8 00:05:39.847466 dracut-cmdline[235]: dracut-dracut-053 May 8 00:05:39.848447 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:05:39.854159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:05:39.860360 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:05:39.866219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:05:39.889034 systemd-resolved[282]: Positive Trust Anchors: May 8 00:05:39.889284 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:05:39.889308 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:05:39.891887 systemd-resolved[282]: Defaulting to hostname 'linux'. May 8 00:05:39.892497 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:05:39.892642 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:05:39.901983 kernel: SCSI subsystem initialized May 8 00:05:39.907975 kernel: Loading iSCSI transport class v2.0-870. May 8 00:05:39.914983 kernel: iscsi: registered transport (tcp) May 8 00:05:39.928429 kernel: iscsi: registered transport (qla4xxx) May 8 00:05:39.928483 kernel: QLogic iSCSI HBA Driver May 8 00:05:39.951861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:05:39.959144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:05:39.976168 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:05:39.976219 kernel: device-mapper: uevent: version 1.0.3 May 8 00:05:39.977517 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:05:40.008986 kernel: raid6: avx2x4 gen() 46369 MB/s May 8 00:05:40.026009 kernel: raid6: avx2x2 gen() 52120 MB/s May 8 00:05:40.043200 kernel: raid6: avx2x1 gen() 42818 MB/s May 8 00:05:40.043257 kernel: raid6: using algorithm avx2x2 gen() 52120 MB/s May 8 00:05:40.061314 kernel: raid6: .... xor() 30694 MB/s, rmw enabled May 8 00:05:40.061371 kernel: raid6: using avx2x2 recovery algorithm May 8 00:05:40.075990 kernel: xor: automatically using best checksumming function avx May 8 00:05:40.169475 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:05:40.174520 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:05:40.180081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 8 00:05:40.189141 systemd-udevd[434]: Using default interface naming scheme 'v255'. May 8 00:05:40.192199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:05:40.198367 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:05:40.204792 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation May 8 00:05:40.220342 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:05:40.225090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:05:40.301716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:05:40.307089 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:05:40.316659 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:05:40.317127 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:05:40.317239 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:05:40.317344 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:05:40.323087 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:05:40.330823 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:05:40.376008 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 8 00:05:40.381413 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 8 00:05:40.381445 kernel: vmw_pvscsi: using 64bit dma May 8 00:05:40.381989 kernel: vmw_pvscsi: max_id: 16 May 8 00:05:40.382007 kernel: vmw_pvscsi: setting ring_pages to 8 May 8 00:05:40.391985 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 8 00:05:40.401744 kernel: vmw_pvscsi: enabling reqCallThreshold May 8 00:05:40.401755 kernel: vmw_pvscsi: driver-based request coalescing enabled May 8 00:05:40.401763 kernel: vmw_pvscsi: using MSI-X May 8 00:05:40.401770 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 8 00:05:40.401793 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 8 00:05:40.404986 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:05:40.416279 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 8 00:05:40.417800 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 8 00:05:40.422977 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 8 00:05:40.423085 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:05:40.424984 kernel: AES CTR mode by8 optimization enabled May 8 00:05:40.425010 kernel: libata version 3.00 loaded. May 8 00:05:40.425233 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:05:40.425312 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:40.426310 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:05:40.426904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:05:40.426986 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:40.427216 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 00:05:40.431022 kernel: ata_piix 0000:00:07.1: version 2.13 May 8 00:05:40.446075 kernel: scsi host1: ata_piix May 8 00:05:40.446161 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 8 00:05:40.544092 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:05:40.544256 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 8 00:05:40.544369 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 8 00:05:40.544498 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 8 00:05:40.544635 kernel: scsi host2: ata_piix May 8 00:05:40.544742 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 8 00:05:40.544762 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 8 00:05:40.544781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:05:40.544800 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:05:40.432153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:40.455543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:40.460082 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:05:40.470803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:40.616041 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 8 00:05:40.619979 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 8 00:05:40.637306 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 8 00:05:40.649939 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:05:40.650012 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:05:40.883975 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (486) May 8 00:05:40.884928 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 8 00:05:40.891990 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (488) May 8 00:05:40.892052 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 8 00:05:40.898609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:05:40.904371 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 8 00:05:40.904674 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 8 00:05:40.909120 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:05:40.935513 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:05:41.948872 disk-uuid[592]: The operation has completed successfully. May 8 00:05:41.949301 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:05:41.989070 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:05:41.989146 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:05:42.010077 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:05:42.017693 sh[606]: Success May 8 00:05:42.034986 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:05:42.086109 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:05:42.096738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
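
disk-uuid.service regenerates the GPT disk GUID on first boot when necessary. The sketch below is an illustration of where that identifier lives, not what the unit actually runs: it reads the disk GUID from the GPT header (LBA 1, GUID at byte offset 56), assumes 512-byte sectors, and needs read access to the block device.

    # Sketch: read the GPT disk GUID from a block device such as /dev/sda.
    import sys
    import uuid

    def gpt_disk_guid(path):
        with open(path, "rb") as disk:
            disk.seek(512)                 # GPT header at LBA 1 (512-byte sectors)
            header = disk.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT header found")
        return uuid.UUID(bytes_le=header[56:72])   # disk GUID field

    print(gpt_disk_guid(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"))
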
May 8 00:05:42.097301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:05:42.135031 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:05:42.135072 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:42.135083 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:05:42.135091 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:05:42.135104 kernel: BTRFS info (device dm-0): using free space tree May 8 00:05:42.143987 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:05:42.145310 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:05:42.154215 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 8 00:05:42.156018 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:05:42.209504 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:42.209548 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:42.209558 kernel: BTRFS info (device sda6): using free space tree May 8 00:05:42.213980 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:05:42.221074 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:42.221924 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:05:42.227102 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:05:42.254912 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:05:42.260169 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:05:42.325319 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:05:42.331777 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:05:42.334889 ignition[662]: Ignition 2.20.0 May 8 00:05:42.334895 ignition[662]: Stage: fetch-offline May 8 00:05:42.334915 ignition[662]: no configs at "/usr/lib/ignition/base.d" May 8 00:05:42.334920 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:42.334990 ignition[662]: parsed url from cmdline: "" May 8 00:05:42.334992 ignition[662]: no config URL provided May 8 00:05:42.334996 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:05:42.335003 ignition[662]: no config at "/usr/lib/ignition/user.ign" May 8 00:05:42.335395 ignition[662]: config successfully fetched May 8 00:05:42.335412 ignition[662]: parsing config with SHA512: 27a9a32715e57ba1cfa0de39cfad2cd6cfe3f1bd0f73f3d1fd6189ed5fd95f8cae23216dfd45631161113c27c645936469e496ced704a30002ae72a8a935d310 May 8 00:05:42.338283 unknown[662]: fetched base config from "system" May 8 00:05:42.338289 unknown[662]: fetched user config from "vmware" May 8 00:05:42.338575 ignition[662]: fetch-offline: fetch-offline passed May 8 00:05:42.338624 ignition[662]: Ignition finished successfully May 8 00:05:42.341230 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
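
Ignition logs a SHA512 fingerprint of whatever config it parsed (the long hex string above). As a small sketch, the same kind of digest can be reproduced for any local config file; the user.ign path below is only a placeholder.

    # Sketch: print the SHA512 fingerprint of a local Ignition config file.
    import hashlib
    import sys

    def sha512_of(path):
        digest = hashlib.sha512()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha512_of(sys.argv[1] if len(sys.argv) > 1 else "user.ign"))
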
May 8 00:05:42.350071 systemd-networkd[795]: lo: Link UP May 8 00:05:42.350078 systemd-networkd[795]: lo: Gained carrier May 8 00:05:42.350847 systemd-networkd[795]: Enumeration completed May 8 00:05:42.351047 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:05:42.351137 systemd-networkd[795]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 8 00:05:42.351196 systemd[1]: Reached target network.target - Network. May 8 00:05:42.351278 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:05:42.355731 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:05:42.355864 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:05:42.355599 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:05:42.357598 systemd-networkd[795]: ens192: Link UP May 8 00:05:42.357602 systemd-networkd[795]: ens192: Gained carrier May 8 00:05:42.366120 ignition[800]: Ignition 2.20.0 May 8 00:05:42.366129 ignition[800]: Stage: kargs May 8 00:05:42.366274 ignition[800]: no configs at "/usr/lib/ignition/base.d" May 8 00:05:42.366281 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:42.367270 ignition[800]: kargs: kargs passed May 8 00:05:42.367311 ignition[800]: Ignition finished successfully May 8 00:05:42.368626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:05:42.372062 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:05:42.378839 ignition[807]: Ignition 2.20.0 May 8 00:05:42.378845 ignition[807]: Stage: disks May 8 00:05:42.378946 ignition[807]: no configs at "/usr/lib/ignition/base.d" May 8 00:05:42.378952 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:42.379460 ignition[807]: disks: disks passed May 8 00:05:42.379483 ignition[807]: Ignition finished successfully May 8 00:05:42.380115 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:05:42.380477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:05:42.380591 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:05:42.380701 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:05:42.380796 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:05:42.380893 systemd[1]: Reached target basic.target - Basic System. May 8 00:05:42.385056 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:05:42.415958 systemd-fsck[815]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:05:42.417354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:05:42.421056 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:05:42.490975 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:05:42.491471 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:05:42.491836 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:05:42.506061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:05:42.507461 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
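
The "ens192: Gained carrier" transition above corresponds to state the kernel exposes under /sys/class/net. A minimal sketch for checking it by hand; the interface name is the one from this log, and reading the carrier attribute fails while the link is administratively down.

    # Sketch: report the operational state and carrier bit of one interface.
    from pathlib import Path

    def link_state(ifname="ens192"):
        base = Path("/sys/class/net") / ifname
        oper = (base / "operstate").read_text().strip()
        try:
            carrier = (base / "carrier").read_text().strip()
        except OSError:                    # raised while the link is down
            carrier = "unknown"
        return f"{ifname}: operstate={oper} carrier={carrier}"

    print(link_state())
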
May 8 00:05:42.507742 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:05:42.507771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:05:42.507787 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:05:42.511020 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:05:42.511988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:05:42.516986 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (823) May 8 00:05:42.519735 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:42.519760 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:42.519769 kernel: BTRFS info (device sda6): using free space tree May 8 00:05:42.524097 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:05:42.525144 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:05:42.563518 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:05:42.576397 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory May 8 00:05:42.578639 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:05:42.581270 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:05:42.636063 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:05:42.640058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:05:42.641061 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:05:42.645980 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:42.658695 ignition[938]: INFO : Ignition 2.20.0 May 8 00:05:42.658695 ignition[938]: INFO : Stage: mount May 8 00:05:42.659069 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:05:42.659069 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:42.660834 ignition[938]: INFO : mount: mount passed May 8 00:05:42.660834 ignition[938]: INFO : Ignition finished successfully May 8 00:05:42.660042 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:05:42.666059 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:05:42.670142 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:05:43.131769 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:05:43.139144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:05:43.146983 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (949) May 8 00:05:43.149190 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:43.149215 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:43.149225 kernel: BTRFS info (device sda6): using free space tree May 8 00:05:43.154405 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:05:43.153922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:05:43.167142 ignition[966]: INFO : Ignition 2.20.0 May 8 00:05:43.167426 ignition[966]: INFO : Stage: files May 8 00:05:43.167808 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:05:43.167946 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:43.168731 ignition[966]: DEBUG : files: compiled without relabeling support, skipping May 8 00:05:43.176287 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:05:43.176287 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:05:43.195914 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:05:43.196300 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:05:43.196780 unknown[966]: wrote ssh authorized keys file for user: core May 8 00:05:43.197202 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:05:43.213545 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:05:43.213545 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:05:43.281866 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:05:43.450084 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:05:43.450084 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:05:43.450597 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:05:43.730061 systemd-networkd[795]: ens192: Gained IPv6LL May 8 00:05:43.948457 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:05:44.150293 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:05:44.150293 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 8 00:05:44.150751 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:05:44.152090 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:05:44.152090 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:05:44.152090 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:05:44.152090 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:05:44.152090 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 8 00:05:44.616849 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:05:44.859783 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:05:44.860091 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:05:44.860091 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:05:44.860091 ignition[966]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 8 00:05:44.862856 ignition[966]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:05:44.863095 ignition[966]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:05:44.863095 ignition[966]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 8 00:05:44.863095 ignition[966]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 8 00:05:44.863095 ignition[966]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:05:44.863749 ignition[966]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:05:44.863749 ignition[966]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 8 00:05:44.863749 ignition[966]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:05:44.885276 ignition[966]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:05:44.887805 ignition[966]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:05:44.888053 ignition[966]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:05:44.888053 ignition[966]: INFO : files: op(13): [started] setting preset to enabled for 
"prepare-helm.service" May 8 00:05:44.888053 ignition[966]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:05:44.888053 ignition[966]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:05:44.890305 ignition[966]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:05:44.890305 ignition[966]: INFO : files: files passed May 8 00:05:44.890305 ignition[966]: INFO : Ignition finished successfully May 8 00:05:44.888944 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:05:44.893134 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:05:44.894344 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:05:44.896129 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:05:44.896347 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:05:44.911124 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:05:44.911124 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:05:44.912260 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:05:44.913239 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:05:44.913666 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:05:44.919091 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:05:44.943499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:05:44.943566 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:05:44.943883 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:05:44.943988 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:05:44.944196 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:05:44.944732 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:05:44.955063 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:05:44.959087 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:05:44.965368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:05:44.965656 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:05:44.965834 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:05:44.966311 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:05:44.966397 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:05:44.966865 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:05:44.967158 systemd[1]: Stopped target basic.target - Basic System. May 8 00:05:44.967588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:05:44.967732 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
May 8 00:05:44.968014 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:05:44.968413 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:05:44.968682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:05:44.968993 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:05:44.969277 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:05:44.969535 systemd[1]: Stopped target swap.target - Swaps. May 8 00:05:44.969777 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:05:44.969867 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:05:44.970360 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:05:44.970640 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:05:44.970930 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:05:44.971146 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:05:44.971401 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:05:44.971486 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:05:44.971955 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:05:44.972137 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:05:44.972474 systemd[1]: Stopped target paths.target - Path Units. May 8 00:05:44.972723 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:05:44.972883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:05:44.973059 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:05:44.973374 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:05:44.973729 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:05:44.973785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:05:44.974066 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:05:44.974116 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:05:44.974506 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:05:44.974576 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:05:44.975014 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:05:44.975080 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:05:44.984198 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:05:44.987138 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:05:44.987398 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:05:44.987486 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:05:44.987910 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:05:44.988129 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:05:44.991389 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:05:44.991617 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 8 00:05:44.993749 ignition[1021]: INFO : Ignition 2.20.0 May 8 00:05:44.993749 ignition[1021]: INFO : Stage: umount May 8 00:05:44.995446 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:05:44.995446 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:44.995446 ignition[1021]: INFO : umount: umount passed May 8 00:05:44.995446 ignition[1021]: INFO : Ignition finished successfully May 8 00:05:44.995170 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:05:44.996062 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:05:44.996470 systemd[1]: Stopped target network.target - Network. May 8 00:05:44.996566 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:05:44.996608 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:05:44.996732 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:05:44.996769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:05:44.996878 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:05:44.996904 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:05:44.997018 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:05:44.997040 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:05:44.997228 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:05:44.998784 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:05:45.004248 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:05:45.006670 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:05:45.006745 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:05:45.007447 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:05:45.007909 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:05:45.007987 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:05:45.009151 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:05:45.009445 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:05:45.009478 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:05:45.013078 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:05:45.013216 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:05:45.013259 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:05:45.013466 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 8 00:05:45.013501 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:05:45.013643 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:05:45.013666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:45.013851 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:05:45.013883 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:05:45.014020 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:05:45.014043 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 8 00:05:45.014382 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:05:45.016293 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:05:45.016340 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:05:45.022268 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:05:45.022528 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:05:45.028502 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:05:45.028591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:05:45.029329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:05:45.029359 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:05:45.029488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:05:45.029511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:05:45.029624 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:05:45.029649 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:05:45.029954 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:05:45.030000 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:05:45.030133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:05:45.030154 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:45.035090 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:05:45.035198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:05:45.035226 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:05:45.035400 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:05:45.035426 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:05:45.035560 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:05:45.035582 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:05:45.036717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:05:45.036743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:45.037323 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:05:45.037356 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:05:45.037951 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:05:45.038408 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:05:45.112939 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:05:45.113041 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:05:45.113491 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:05:45.113662 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:05:45.113704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:05:45.119077 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
May 8 00:05:45.132244 systemd[1]: Switching root. May 8 00:05:45.172673 systemd-journald[216]: Journal stopped May 8 00:05:47.265986 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). May 8 00:05:47.266007 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:05:47.266015 kernel: SELinux: policy capability open_perms=1 May 8 00:05:47.266021 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:05:47.266026 kernel: SELinux: policy capability always_check_network=0 May 8 00:05:47.266031 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:05:47.266038 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:05:47.266045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:05:47.266050 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:05:47.266055 kernel: audit: type=1403 audit(1746662746.147:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:05:47.266062 systemd[1]: Successfully loaded SELinux policy in 36.919ms. May 8 00:05:47.266070 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.137ms. May 8 00:05:47.266081 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:05:47.266094 systemd[1]: Detected virtualization vmware. May 8 00:05:47.266106 systemd[1]: Detected architecture x86-64. May 8 00:05:47.266117 systemd[1]: Detected first boot. May 8 00:05:47.266129 systemd[1]: Initializing machine ID from random generator. May 8 00:05:47.266142 zram_generator::config[1066]: No configuration found. May 8 00:05:47.266270 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 8 00:05:47.266282 kernel: Guest personality initialized and is active May 8 00:05:47.266288 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:05:47.266294 kernel: Initialized host personality May 8 00:05:47.266299 kernel: NET: Registered PF_VSOCK protocol family May 8 00:05:47.266306 systemd[1]: Populated /etc with preset unit settings. May 8 00:05:47.266315 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:05:47.266323 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 8 00:05:47.266329 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:05:47.266336 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:05:47.266342 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:05:47.266348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:05:47.266357 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:05:47.266364 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:05:47.266371 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:05:47.266377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:05:47.266384 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
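
"Initializing machine ID from random generator" means a fresh 128-bit identifier was written to /etc/machine-id on this first boot. A trivial sketch to read and sanity-check it:

    # Sketch: machine-id is 32 lowercase hex characters plus a newline.
    import re
    from pathlib import Path

    machine_id = Path("/etc/machine-id").read_text().strip()
    if not re.fullmatch(r"[0-9a-f]{32}", machine_id):
        raise SystemExit("unexpected machine-id format")
    print(machine_id)
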
May 8 00:05:47.266391 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:05:47.266397 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:05:47.266404 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:05:47.266411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:05:47.266419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:05:47.266427 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:05:47.266434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:05:47.266441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:05:47.266448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:05:47.266454 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:05:47.266461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:05:47.266469 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:05:47.266476 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:05:47.266482 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:05:47.266489 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:05:47.266495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:05:47.266503 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:05:47.266509 systemd[1]: Reached target slices.target - Slice Units. May 8 00:05:47.266516 systemd[1]: Reached target swap.target - Swaps. May 8 00:05:47.266523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:05:47.266530 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:05:47.266537 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:05:47.266544 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:05:47.266551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:05:47.266559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:05:47.266566 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:05:47.266573 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:05:47.266579 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:05:47.266586 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:05:47.266593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:47.266600 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:05:47.266606 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:05:47.266614 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 8 00:05:47.266622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:05:47.266629 systemd[1]: Reached target machines.target - Containers. May 8 00:05:47.266635 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:05:47.266642 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 8 00:05:47.266649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:05:47.266656 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:05:47.266663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:05:47.266671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:05:47.266678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:05:47.266684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:05:47.266691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:05:47.266698 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:05:47.266705 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:05:47.266712 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:05:47.266718 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:05:47.266725 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:05:47.266734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:05:47.266741 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:05:47.266747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:05:47.266754 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:05:47.266761 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:05:47.266768 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:05:47.266774 kernel: fuse: init (API version 7.39) May 8 00:05:47.266780 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:05:47.266789 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:05:47.266796 systemd[1]: Stopped verity-setup.service. May 8 00:05:47.266803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:47.266810 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:05:47.266817 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:05:47.266824 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:05:47.266830 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:05:47.266837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:05:47.266856 systemd-journald[1152]: Collecting audit messages is disabled. 
May 8 00:05:47.266873 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:05:47.266881 systemd-journald[1152]: Journal started May 8 00:05:47.266897 systemd-journald[1152]: Runtime Journal (/run/log/journal/82b933b6c3494e0989b1261cca753b9c) is 4.8M, max 38.6M, 33.8M free. May 8 00:05:47.053847 systemd[1]: Queued start job for default target multi-user.target. May 8 00:05:47.268065 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:05:47.065014 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 00:05:47.065282 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:05:47.269685 jq[1136]: true May 8 00:05:47.272981 kernel: loop: module loaded May 8 00:05:47.275987 jq[1161]: true May 8 00:05:47.276243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:05:47.276710 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:05:47.276810 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:05:47.277061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:05:47.277152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:05:47.277375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:05:47.277461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:05:47.279430 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:05:47.280255 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:05:47.280504 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:05:47.280592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:05:47.281985 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:05:47.282255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:05:47.288016 kernel: ACPI: bus type drm_connector registered May 8 00:05:47.290464 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:05:47.290954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:05:47.296032 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:05:47.305047 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:05:47.309356 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:05:47.309601 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:05:47.309625 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:05:47.310339 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:05:47.313483 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:05:47.314647 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:05:47.314800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:47.339278 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:05:47.342251 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
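
The journal sizes reported above ("Runtime Journal ... is 4.8M, max 38.6M") refer to the runtime journal under /run/log/journal. A small sketch, assuming that default location, that tallies its on-disk size:

    # Sketch: sum the sizes of all files under the runtime journal directory.
    import os

    def tree_size(root="/run/log/journal"):
        total = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:            # journal files can rotate away mid-walk
                    pass
        return total

    print(f"{tree_size() / (1024 * 1024):.1f} MiB")
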
May 8 00:05:47.342701 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:05:47.345954 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:05:47.346107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:05:47.347322 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:05:47.350628 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:05:47.352610 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:05:47.353171 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:05:47.353378 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:05:47.353527 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:05:47.353760 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:05:47.368124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:05:47.390810 systemd-journald[1152]: Time spent on flushing to /var/log/journal/82b933b6c3494e0989b1261cca753b9c is 85.940ms for 1853 entries. May 8 00:05:47.390810 systemd-journald[1152]: System Journal (/var/log/journal/82b933b6c3494e0989b1261cca753b9c) is 8M, max 584.8M, 576.8M free. May 8 00:05:47.489698 systemd-journald[1152]: Received client request to flush runtime journal. May 8 00:05:47.489729 kernel: loop0: detected capacity change from 0 to 138176 May 8 00:05:47.394497 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:05:47.394689 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:05:47.401556 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:05:47.423702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:47.455780 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. May 8 00:05:47.455796 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. May 8 00:05:47.462878 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:05:47.468736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:05:47.475381 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:05:47.497133 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:05:47.500876 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:05:47.512109 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:05:47.512709 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:05:47.516979 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:05:47.519755 ignition[1162]: Ignition 2.20.0 May 8 00:05:47.519952 ignition[1162]: deleting config from guestinfo properties May 8 00:05:47.524856 udevadm[1233]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
May 8 00:05:47.573262 ignition[1162]: Successfully deleted config May 8 00:05:47.574540 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 8 00:05:47.594023 kernel: loop1: detected capacity change from 0 to 2960 May 8 00:05:47.618040 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:05:47.623388 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:05:47.641318 kernel: loop2: detected capacity change from 0 to 147912 May 8 00:05:47.649696 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 8 00:05:47.650244 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 8 00:05:47.660465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:05:47.681982 kernel: loop3: detected capacity change from 0 to 205544 May 8 00:05:47.795914 kernel: loop4: detected capacity change from 0 to 138176 May 8 00:05:47.832985 kernel: loop5: detected capacity change from 0 to 2960 May 8 00:05:47.845002 kernel: loop6: detected capacity change from 0 to 147912 May 8 00:05:47.918196 kernel: loop7: detected capacity change from 0 to 205544 May 8 00:05:47.943180 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 8 00:05:47.943641 (sd-merge)[1246]: Merged extensions into '/usr'. May 8 00:05:47.947291 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:05:47.947376 systemd[1]: Reloading... May 8 00:05:47.993989 zram_generator::config[1270]: No configuration found. May 8 00:05:48.117905 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:05:48.137623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:05:48.181180 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:05:48.181274 systemd[1]: Reloading finished in 233 ms. May 8 00:05:48.203296 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:05:48.210093 systemd[1]: Starting ensure-sysext.service... May 8 00:05:48.213356 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:05:48.226836 systemd[1]: Reload requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)... May 8 00:05:48.226845 systemd[1]: Reloading... May 8 00:05:48.241656 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:05:48.244070 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:05:48.244605 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:05:48.244767 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. May 8 00:05:48.244808 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. May 8 00:05:48.248917 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. 
May 8 00:05:48.248923 systemd-tmpfiles[1330]: Skipping /boot May 8 00:05:48.254316 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:05:48.254322 systemd-tmpfiles[1330]: Skipping /boot May 8 00:05:48.286071 zram_generator::config[1356]: No configuration found. May 8 00:05:48.349197 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:05:48.369392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:05:48.413748 systemd[1]: Reloading finished in 186 ms. May 8 00:05:48.426690 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:05:48.434161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:05:48.438217 ldconfig[1183]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:05:48.442212 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:05:48.444184 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:05:48.447200 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:05:48.454018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:05:48.455278 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:05:48.457635 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:05:48.459397 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:05:48.463265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:48.467116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:05:48.470075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:05:48.473397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:05:48.473584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:48.473664 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:05:48.475425 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:05:48.476993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:48.477659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:05:48.477780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:05:48.479019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:48.480090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
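
A few lines above, systemd-sysext merged the containerd-flatcar, docker-flatcar, kubernetes, and oem-vmware extension images into /usr. The current merge state can be inspected with the systemd-sysext tool; the sketch below just shells out to it, and the exact output format varies across systemd versions.

    # Sketch: show which system extension images are currently merged.
    import subprocess

    result = subprocess.run(["systemd-sysext", "status"],
                            capture_output=True, text=True, check=False)
    print(result.stdout or result.stderr)
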
May 8 00:05:48.480246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:48.480308 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:05:48.480371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:48.482661 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:05:48.482768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:05:48.487435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:48.492184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:05:48.495096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:05:48.495297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:48.495362 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:05:48.495472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:48.496195 systemd[1]: Finished ensure-sysext.service. May 8 00:05:48.496544 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:05:48.496640 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:05:48.497715 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:05:48.508171 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:05:48.508513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:05:48.508636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:05:48.508919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:05:48.509043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:05:48.512326 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:05:48.512786 systemd-udevd[1428]: Using default interface naming scheme 'v255'. May 8 00:05:48.514109 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:05:48.514246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:05:48.514526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:05:48.535339 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:05:48.621337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:05:48.628250 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:05:48.634352 augenrules[1467]: No rules May 8 00:05:48.635845 systemd[1]: audit-rules.service: Deactivated successfully. 
May 8 00:05:48.637120 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:05:48.651223 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:05:48.673491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:05:48.680211 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:05:48.687186 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:05:48.687475 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:05:48.691521 systemd-resolved[1422]: Positive Trust Anchors: May 8 00:05:48.691529 systemd-resolved[1422]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:05:48.691552 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:05:48.697483 systemd-resolved[1422]: Defaulting to hostname 'linux'. May 8 00:05:48.699953 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:05:48.701914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:05:48.708919 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:05:48.709172 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:05:48.743850 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:05:48.744942 systemd-networkd[1481]: lo: Link UP May 8 00:05:48.745605 systemd-networkd[1481]: lo: Gained carrier May 8 00:05:48.746128 systemd-networkd[1481]: Enumeration completed May 8 00:05:48.746583 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:05:48.746758 systemd[1]: Reached target network.target - Network. May 8 00:05:48.752117 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:05:48.753951 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:05:48.767733 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:05:48.783701 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:05:48.783881 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:05:48.781731 systemd-networkd[1481]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 8 00:05:48.785419 systemd-networkd[1481]: ens192: Link UP May 8 00:05:48.785783 systemd-networkd[1481]: ens192: Gained carrier May 8 00:05:48.791221 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. 
May 8 00:05:48.791983 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:05:48.799977 kernel: ACPI: button: Power Button [PWRF] May 8 00:05:48.802035 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1479) May 8 00:05:48.852004 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 8 00:05:48.875642 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:05:48.882531 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:05:48.880117 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:05:48.900110 (udev-worker)[1477]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 8 00:05:48.907130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:48.907623 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:05:48.922987 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:05:48.936227 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:05:48.941115 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:05:48.948940 lvm[1519]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:05:48.975659 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:05:48.980716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:48.981153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:05:48.981273 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:05:48.981433 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:05:48.981562 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:05:48.981760 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:05:48.981905 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:05:48.982024 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:05:48.982132 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:05:48.982154 systemd[1]: Reached target paths.target - Path Units. May 8 00:05:48.982238 systemd[1]: Reached target timers.target - Timer Units. May 8 00:05:48.987358 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:05:48.988380 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:05:48.989938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:05:48.990157 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:05:48.990275 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:05:48.994140 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:05:48.994467 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
May 8 00:05:48.995391 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:05:48.995820 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:05:48.995952 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:05:48.996055 systemd[1]: Reached target basic.target - Basic System. May 8 00:05:48.996167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:05:48.996184 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:05:48.999079 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:05:49.000492 lvm[1526]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:05:49.001113 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:05:49.001855 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:05:49.004306 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:05:49.004409 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:05:49.006380 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:05:49.009062 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:05:49.012089 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:05:49.014372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:05:49.015568 jq[1529]: false May 8 00:05:49.018410 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:05:49.018999 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:05:49.019474 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:05:49.026101 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:05:49.029995 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:05:49.038866 jq[1540]: true May 8 00:05:49.044028 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 8 00:05:49.044932 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:05:49.048275 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:05:49.048411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:05:49.049142 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:05:49.049443 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 8 00:05:49.052099 update_engine[1537]: I20250508 00:05:49.051887 1537 main.cc:92] Flatcar Update Engine starting May 8 00:05:49.060839 dbus-daemon[1528]: [system] SELinux support is enabled May 8 00:05:49.065078 extend-filesystems[1530]: Found loop4 May 8 00:05:49.065078 extend-filesystems[1530]: Found loop5 May 8 00:05:49.065078 extend-filesystems[1530]: Found loop6 May 8 00:05:49.065078 extend-filesystems[1530]: Found loop7 May 8 00:05:49.065078 extend-filesystems[1530]: Found sda May 8 00:05:49.065078 extend-filesystems[1530]: Found sda1 May 8 00:05:49.065078 extend-filesystems[1530]: Found sda2 May 8 00:05:49.065078 extend-filesystems[1530]: Found sda3 May 8 00:05:49.065078 extend-filesystems[1530]: Found usr May 8 00:05:49.065078 extend-filesystems[1530]: Found sda4 May 8 00:05:49.065078 extend-filesystems[1530]: Found sda6 May 8 00:05:49.068379 update_engine[1537]: I20250508 00:05:49.064224 1537 update_check_scheduler.cc:74] Next update check in 7m52s May 8 00:05:49.061998 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:05:49.068438 jq[1549]: true May 8 00:05:49.071325 extend-filesystems[1530]: Found sda7 May 8 00:05:49.071325 extend-filesystems[1530]: Found sda9 May 8 00:05:49.071325 extend-filesystems[1530]: Checking size of /dev/sda9 May 8 00:05:49.074392 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:05:49.074415 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:05:49.075022 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:05:49.075036 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:05:49.075328 systemd[1]: Started update-engine.service - Update Engine. May 8 00:05:49.076483 (ntainerd)[1561]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:05:49.078122 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:05:49.078431 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:05:49.078815 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:05:49.089202 extend-filesystems[1530]: Old size kept for /dev/sda9 May 8 00:05:49.089202 extend-filesystems[1530]: Found sr0 May 8 00:05:49.090059 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:05:49.090199 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:05:49.106132 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 8 00:05:49.110027 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 8 00:05:49.110324 tar[1548]: linux-amd64/helm May 8 00:05:49.139246 systemd-logind[1535]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:05:49.139262 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:05:49.141302 systemd-logind[1535]: New seat seat0. May 8 00:05:49.142675 systemd[1]: Started systemd-logind.service - User Login Management. 
May 8 00:05:49.156404 unknown[1579]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 8 00:05:49.160066 unknown[1579]: Core dump limit set to -1 May 8 00:05:49.166774 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1482) May 8 00:05:49.165564 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 8 00:05:49.178039 bash[1588]: Updated "/home/core/.ssh/authorized_keys" May 8 00:05:49.180398 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:05:49.181000 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:05:49.254596 locksmithd[1565]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:05:49.405248 containerd[1561]: time="2025-05-08T00:05:49.405199677Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:05:49.443240 containerd[1561]: time="2025-05-08T00:05:49.443206608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.445592 containerd[1561]: time="2025-05-08T00:05:49.445567308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:49.445592 containerd[1561]: time="2025-05-08T00:05:49.445588430Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:05:49.445654 containerd[1561]: time="2025-05-08T00:05:49.445601358Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:05:49.445705 containerd[1561]: time="2025-05-08T00:05:49.445692503Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:05:49.445728 containerd[1561]: time="2025-05-08T00:05:49.445707646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.445760 containerd[1561]: time="2025-05-08T00:05:49.445746057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:49.445784 containerd[1561]: time="2025-05-08T00:05:49.445772423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.446930 containerd[1561]: time="2025-05-08T00:05:49.446915351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:49.446980 containerd[1561]: time="2025-05-08T00:05:49.446932281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.446980 containerd[1561]: time="2025-05-08T00:05:49.446945350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:49.446980 containerd[1561]: time="2025-05-08T00:05:49.446952675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.447033 containerd[1561]: time="2025-05-08T00:05:49.447011823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.447215 containerd[1561]: time="2025-05-08T00:05:49.447199180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:49.449419 containerd[1561]: time="2025-05-08T00:05:49.449395271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:49.449419 containerd[1561]: time="2025-05-08T00:05:49.449415200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:05:49.449485 containerd[1561]: time="2025-05-08T00:05:49.449473499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:05:49.449514 containerd[1561]: time="2025-05-08T00:05:49.449505093Z" level=info msg="metadata content store policy set" policy=shared May 8 00:05:49.451778 containerd[1561]: time="2025-05-08T00:05:49.451756672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:05:49.451845 containerd[1561]: time="2025-05-08T00:05:49.451789332Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:05:49.451845 containerd[1561]: time="2025-05-08T00:05:49.451802550Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:05:49.451845 containerd[1561]: time="2025-05-08T00:05:49.451812416Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:05:49.451845 containerd[1561]: time="2025-05-08T00:05:49.451820608Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:05:49.451910 containerd[1561]: time="2025-05-08T00:05:49.451900031Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455364476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455486329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455496680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455506135Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455514023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455521744Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455528725Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455536573Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455544949Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455551874Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455558923Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455565274Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455580435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:05:49.456980 containerd[1561]: time="2025-05-08T00:05:49.455588374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:05:49.456903 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455598206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455606447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455614402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455625744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455632644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455640293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455647318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455655695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455663238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455669565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455675647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455683907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455695563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455703793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457339 containerd[1561]: time="2025-05-08T00:05:49.455709858Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455736134Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455748079Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455756808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455763744Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455768953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455775673Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455780987Z" level=info msg="NRI interface is disabled by configuration." May 8 00:05:49.457538 containerd[1561]: time="2025-05-08T00:05:49.455794475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.455982371Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456011976Z" level=info msg="Connect containerd service" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456036001Z" level=info msg="using legacy CRI server" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456040649Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456112965Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456480014Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:05:49.457658 
containerd[1561]: time="2025-05-08T00:05:49.456651803Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456678178Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456725731Z" level=info msg="Start subscribing containerd event" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456752920Z" level=info msg="Start recovering state" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456788108Z" level=info msg="Start event monitor" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456794561Z" level=info msg="Start snapshots syncer" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456799328Z" level=info msg="Start cni network conf syncer for default" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456803365Z" level=info msg="Start streaming server" May 8 00:05:49.457658 containerd[1561]: time="2025-05-08T00:05:49.456838964Z" level=info msg="containerd successfully booted in 0.052702s" May 8 00:05:49.586056 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:05:49.587061 tar[1548]: linux-amd64/LICENSE May 8 00:05:49.587061 tar[1548]: linux-amd64/README.md May 8 00:05:49.593665 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:05:49.601622 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:05:49.606165 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:05:49.609356 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:05:49.609498 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:05:49.611478 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:05:49.619374 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:05:49.620827 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:05:49.624087 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:05:49.624287 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:05:50.066117 systemd-networkd[1481]: ens192: Gained IPv6LL May 8 00:05:50.067048 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. May 8 00:05:50.067737 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:05:50.068479 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:05:50.080409 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 8 00:05:50.090743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:05:50.093913 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:05:50.130839 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:05:50.131013 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 8 00:05:50.132012 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:05:50.136039 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:05:50.920547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:05:50.920878 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 8 00:05:50.921685 systemd[1]: Startup finished in 987ms (kernel) + 6.508s (initrd) + 4.809s (userspace) = 12.305s. May 8 00:05:50.934485 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:05:50.969537 login[1673]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 00:05:50.971083 login[1674]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 00:05:50.976114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:05:50.983155 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:05:50.991221 systemd-logind[1535]: New session 1 of user core. May 8 00:05:50.993901 systemd-logind[1535]: New session 2 of user core. May 8 00:05:50.998845 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:05:51.006716 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:05:51.009308 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:05:51.011211 systemd-logind[1535]: New session c1 of user core. May 8 00:05:51.101424 systemd[1713]: Queued start job for default target default.target. May 8 00:05:51.110883 systemd[1713]: Created slice app.slice - User Application Slice. May 8 00:05:51.110906 systemd[1713]: Reached target paths.target - Paths. May 8 00:05:51.110936 systemd[1713]: Reached target timers.target - Timers. May 8 00:05:51.111809 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:05:51.119366 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:05:51.119463 systemd[1713]: Reached target sockets.target - Sockets. May 8 00:05:51.119549 systemd[1713]: Reached target basic.target - Basic System. May 8 00:05:51.119609 systemd[1713]: Reached target default.target - Main User Target. May 8 00:05:51.119666 systemd[1713]: Startup finished in 103ms. May 8 00:05:51.119687 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:05:51.120815 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:05:51.121402 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:05:51.470273 kubelet[1706]: E0508 00:05:51.470207 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:05:51.471508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:05:51.471594 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:05:51.471788 systemd[1]: kubelet.service: Consumed 588ms CPU time, 236.7M memory peak. May 8 00:05:51.779205 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. May 8 00:06:01.721926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:06:01.730129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:01.863642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:06:01.866176 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:06:01.928619 kubelet[1757]: E0508 00:06:01.928583 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:06:01.930961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:06:01.931066 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:06:01.931276 systemd[1]: kubelet.service: Consumed 83ms CPU time, 97.9M memory peak. May 8 00:06:12.147660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:06:12.158176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:12.436534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:06:12.439600 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:06:12.508622 kubelet[1772]: E0508 00:06:12.508586 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:06:12.509775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:06:12.509856 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:06:12.510140 systemd[1]: kubelet.service: Consumed 85ms CPU time, 97.5M memory peak. May 8 00:06:19.281871 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:06:19.282595 systemd[1]: Started sshd@0-139.178.70.109:22-139.178.89.65:60742.service - OpenSSH per-connection server daemon (139.178.89.65:60742). May 8 00:06:19.324924 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 60742 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:19.325700 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:19.328415 systemd-logind[1535]: New session 3 of user core. May 8 00:06:19.335080 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:06:19.404214 systemd[1]: Started sshd@1-139.178.70.109:22-139.178.89.65:60748.service - OpenSSH per-connection server daemon (139.178.89.65:60748). May 8 00:06:19.437881 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 60748 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:19.438649 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:19.441457 systemd-logind[1535]: New session 4 of user core. May 8 00:06:19.446073 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:06:19.494248 sshd[1788]: Connection closed by 139.178.89.65 port 60748 May 8 00:06:19.494544 sshd-session[1786]: pam_unix(sshd:session): session closed for user core May 8 00:06:19.512216 systemd[1]: sshd@1-139.178.70.109:22-139.178.89.65:60748.service: Deactivated successfully. 
May 8 00:06:19.513120 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:06:19.513572 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. May 8 00:06:19.517391 systemd[1]: Started sshd@2-139.178.70.109:22-139.178.89.65:60764.service - OpenSSH per-connection server daemon (139.178.89.65:60764). May 8 00:06:19.519169 systemd-logind[1535]: Removed session 4. May 8 00:06:19.557632 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 60764 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:19.558544 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:19.561849 systemd-logind[1535]: New session 5 of user core. May 8 00:06:19.567048 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:06:19.612987 sshd[1796]: Connection closed by 139.178.89.65 port 60764 May 8 00:06:19.613905 sshd-session[1793]: pam_unix(sshd:session): session closed for user core May 8 00:06:19.626610 systemd[1]: sshd@2-139.178.70.109:22-139.178.89.65:60764.service: Deactivated successfully. May 8 00:06:19.627696 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:06:19.628252 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. May 8 00:06:19.634249 systemd[1]: Started sshd@3-139.178.70.109:22-139.178.89.65:60774.service - OpenSSH per-connection server daemon (139.178.89.65:60774). May 8 00:06:19.637364 systemd-logind[1535]: Removed session 5. May 8 00:06:19.671214 sshd[1801]: Accepted publickey for core from 139.178.89.65 port 60774 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:19.672081 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:19.675039 systemd-logind[1535]: New session 6 of user core. May 8 00:06:19.695139 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:06:19.744482 sshd[1804]: Connection closed by 139.178.89.65 port 60774 May 8 00:06:19.745261 sshd-session[1801]: pam_unix(sshd:session): session closed for user core May 8 00:06:19.754997 systemd[1]: sshd@3-139.178.70.109:22-139.178.89.65:60774.service: Deactivated successfully. May 8 00:06:19.755843 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:06:19.756592 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. May 8 00:06:19.757326 systemd[1]: Started sshd@4-139.178.70.109:22-139.178.89.65:60776.service - OpenSSH per-connection server daemon (139.178.89.65:60776). May 8 00:06:19.758346 systemd-logind[1535]: Removed session 6. May 8 00:06:19.801857 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 60776 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:19.802619 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:19.806138 systemd-logind[1535]: New session 7 of user core. May 8 00:06:19.815129 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 8 00:06:19.870232 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:06:19.870598 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:06:19.878577 sudo[1813]: pam_unix(sudo:session): session closed for user root May 8 00:06:19.879587 sshd[1812]: Connection closed by 139.178.89.65 port 60776 May 8 00:06:19.880042 sshd-session[1809]: pam_unix(sshd:session): session closed for user core May 8 00:06:19.888931 systemd[1]: sshd@4-139.178.70.109:22-139.178.89.65:60776.service: Deactivated successfully. May 8 00:06:19.890037 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:06:19.890996 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. May 8 00:06:19.891999 systemd[1]: Started sshd@5-139.178.70.109:22-139.178.89.65:60778.service - OpenSSH per-connection server daemon (139.178.89.65:60778). May 8 00:06:19.893179 systemd-logind[1535]: Removed session 7. May 8 00:06:19.929492 sshd[1818]: Accepted publickey for core from 139.178.89.65 port 60778 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:19.930305 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:19.932946 systemd-logind[1535]: New session 8 of user core. May 8 00:06:19.947122 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:06:19.996482 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:06:19.996701 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:06:19.999546 sudo[1823]: pam_unix(sudo:session): session closed for user root May 8 00:06:20.003177 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:06:20.003331 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:06:20.014259 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:06:20.031896 augenrules[1845]: No rules May 8 00:06:20.032270 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:06:20.032404 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:06:20.032897 sudo[1822]: pam_unix(sudo:session): session closed for user root May 8 00:06:20.033750 sshd[1821]: Connection closed by 139.178.89.65 port 60778 May 8 00:06:20.034031 sshd-session[1818]: pam_unix(sshd:session): session closed for user core May 8 00:06:20.043078 systemd[1]: sshd@5-139.178.70.109:22-139.178.89.65:60778.service: Deactivated successfully. May 8 00:06:20.043896 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:06:20.044687 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. May 8 00:06:20.047292 systemd[1]: Started sshd@6-139.178.70.109:22-139.178.89.65:60792.service - OpenSSH per-connection server daemon (139.178.89.65:60792). May 8 00:06:20.048132 systemd-logind[1535]: Removed session 8. May 8 00:06:20.083979 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 60792 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:06:20.084949 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:20.088127 systemd-logind[1535]: New session 9 of user core. May 8 00:06:20.097147 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 8 00:06:20.146319 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:06:20.146481 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:06:20.530383 (dockerd)[1876]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:06:20.530749 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:06:20.973813 dockerd[1876]: time="2025-05-08T00:06:20.973768824Z" level=info msg="Starting up" May 8 00:06:21.108981 dockerd[1876]: time="2025-05-08T00:06:21.108948740Z" level=info msg="Loading containers: start." May 8 00:06:21.286133 kernel: Initializing XFRM netlink socket May 8 00:06:21.339104 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. May 8 00:07:42.157790 systemd-resolved[1422]: Clock change detected. Flushing caches. May 8 00:07:42.158172 systemd-timesyncd[1450]: Contacted time server 104.234.61.117:123 (2.flatcar.pool.ntp.org). May 8 00:07:42.158215 systemd-timesyncd[1450]: Initial clock synchronization to Thu 2025-05-08 00:07:42.157658 UTC. May 8 00:07:42.185748 systemd-networkd[1481]: docker0: Link UP May 8 00:07:42.214744 dockerd[1876]: time="2025-05-08T00:07:42.214710188Z" level=info msg="Loading containers: done." May 8 00:07:42.230055 dockerd[1876]: time="2025-05-08T00:07:42.230025484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:07:42.230160 dockerd[1876]: time="2025-05-08T00:07:42.230093710Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:07:42.230185 dockerd[1876]: time="2025-05-08T00:07:42.230162989Z" level=info msg="Daemon has completed initialization" May 8 00:07:42.246646 dockerd[1876]: time="2025-05-08T00:07:42.246384389Z" level=info msg="API listen on /run/docker.sock" May 8 00:07:42.246925 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:07:43.450367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 00:07:43.456868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:43.922043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:43.925218 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:07:43.955357 kubelet[2070]: E0508 00:07:43.955322 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:07:43.956920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:07:43.957009 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:07:43.957532 systemd[1]: kubelet.service: Consumed 86ms CPU time, 97.8M memory peak. 
May 8 00:07:44.440246 containerd[1561]: time="2025-05-08T00:07:44.440217879Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 00:07:45.335072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147524918.mount: Deactivated successfully. May 8 00:07:46.873199 containerd[1561]: time="2025-05-08T00:07:46.873149872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:46.874101 containerd[1561]: time="2025-05-08T00:07:46.873920897Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 8 00:07:46.874472 containerd[1561]: time="2025-05-08T00:07:46.874453541Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:46.876350 containerd[1561]: time="2025-05-08T00:07:46.876329848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:46.877161 containerd[1561]: time="2025-05-08T00:07:46.877138958Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.436895622s" May 8 00:07:46.877213 containerd[1561]: time="2025-05-08T00:07:46.877165788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 8 00:07:46.878460 containerd[1561]: time="2025-05-08T00:07:46.878440172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 00:07:48.934727 containerd[1561]: time="2025-05-08T00:07:48.934242636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:48.945176 containerd[1561]: time="2025-05-08T00:07:48.945059711Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 8 00:07:48.953623 containerd[1561]: time="2025-05-08T00:07:48.953588737Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:48.965366 containerd[1561]: time="2025-05-08T00:07:48.965340536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:48.965992 containerd[1561]: time="2025-05-08T00:07:48.965861250Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.087399164s" May 8 00:07:48.965992 
containerd[1561]: time="2025-05-08T00:07:48.965878419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 8 00:07:48.966144 containerd[1561]: time="2025-05-08T00:07:48.966119125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 00:07:50.613716 containerd[1561]: time="2025-05-08T00:07:50.613347197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:50.623890 containerd[1561]: time="2025-05-08T00:07:50.623854316Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 8 00:07:50.631996 containerd[1561]: time="2025-05-08T00:07:50.631969512Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:50.642406 containerd[1561]: time="2025-05-08T00:07:50.642379882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:50.643123 containerd[1561]: time="2025-05-08T00:07:50.643046252Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.676909898s" May 8 00:07:50.643123 containerd[1561]: time="2025-05-08T00:07:50.643063184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 8 00:07:50.643617 containerd[1561]: time="2025-05-08T00:07:50.643406148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:07:52.362061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582562635.mount: Deactivated successfully. 
May 8 00:07:52.818594 containerd[1561]: time="2025-05-08T00:07:52.818089301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:52.823138 containerd[1561]: time="2025-05-08T00:07:52.823085191Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 8 00:07:52.829573 containerd[1561]: time="2025-05-08T00:07:52.829516651Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:52.838377 containerd[1561]: time="2025-05-08T00:07:52.838327096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:52.838972 containerd[1561]: time="2025-05-08T00:07:52.838792157Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.195367889s" May 8 00:07:52.838972 containerd[1561]: time="2025-05-08T00:07:52.838819281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 8 00:07:52.839213 containerd[1561]: time="2025-05-08T00:07:52.839195049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:07:53.777131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746114673.mount: Deactivated successfully. May 8 00:07:54.200484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 8 00:07:54.205851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:54.270796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:54.272582 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:07:54.305016 kubelet[2200]: E0508 00:07:54.304987 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:07:54.307426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:07:54.307554 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:07:54.307822 systemd[1]: kubelet.service: Consumed 77ms CPU time, 93.7M memory peak. 
May 8 00:07:54.530774 containerd[1561]: time="2025-05-08T00:07:54.530135818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:54.535070 containerd[1561]: time="2025-05-08T00:07:54.535012080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:07:54.553027 containerd[1561]: time="2025-05-08T00:07:54.552980353Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:54.566804 containerd[1561]: time="2025-05-08T00:07:54.566761510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:54.567931 containerd[1561]: time="2025-05-08T00:07:54.567789969Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.728510518s" May 8 00:07:54.567931 containerd[1561]: time="2025-05-08T00:07:54.567814690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:07:54.568507 containerd[1561]: time="2025-05-08T00:07:54.568482947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:07:55.041934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158473890.mount: Deactivated successfully. 
May 8 00:07:55.067769 containerd[1561]: time="2025-05-08T00:07:55.067730646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:55.071635 containerd[1561]: time="2025-05-08T00:07:55.071590604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:07:55.075635 containerd[1561]: time="2025-05-08T00:07:55.075607504Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:55.080372 containerd[1561]: time="2025-05-08T00:07:55.080349464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:55.081046 containerd[1561]: time="2025-05-08T00:07:55.080745069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.241259ms" May 8 00:07:55.081046 containerd[1561]: time="2025-05-08T00:07:55.080764784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:07:55.081046 containerd[1561]: time="2025-05-08T00:07:55.081010160Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 00:07:55.433254 update_engine[1537]: I20250508 00:07:55.432899 1537 update_attempter.cc:509] Updating boot flags... May 8 00:07:55.465721 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2221) May 8 00:07:55.520748 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2217) May 8 00:07:55.620124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139981022.mount: Deactivated successfully. 
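[editor's note] The coordinated set of images pulled in this stretch, kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.31.8, coredns v1.11.1, pause 3.10, and etcd 3.5.15-0 (whose fetch is just starting), is consistent with a kubeadm image pre-pull for a v1.31.8 control plane. As a hedged sketch, and not a file visible anywhere in this log, the kubeadm ClusterConfiguration driving such a pull could be as small as:

# Sketch: only the resulting image tags appear in the log, not this file.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.31.8
imageRepository: registry.k8s.io   # default upstream registry, matching the tags above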
May 8 00:07:57.412849 containerd[1561]: time="2025-05-08T00:07:57.412806318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:57.420725 containerd[1561]: time="2025-05-08T00:07:57.420668212Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 8 00:07:57.426774 containerd[1561]: time="2025-05-08T00:07:57.426733315Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:57.432182 containerd[1561]: time="2025-05-08T00:07:57.432137174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:57.433157 containerd[1561]: time="2025-05-08T00:07:57.432847186Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.351820672s" May 8 00:07:57.433157 containerd[1561]: time="2025-05-08T00:07:57.432872601Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 8 00:07:59.725528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:59.725751 systemd[1]: kubelet.service: Consumed 77ms CPU time, 93.7M memory peak. May 8 00:07:59.740851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:59.756369 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-9.scope)... May 8 00:07:59.756480 systemd[1]: Reloading... May 8 00:07:59.825762 zram_generator::config[2356]: No configuration found. May 8 00:07:59.890811 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:07:59.909308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:07:59.976080 systemd[1]: Reloading finished in 219 ms. May 8 00:07:59.999533 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:07:59.999598 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:07:59.999835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:00.004851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:00.258875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:00.261930 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:08:00.328427 kubelet[2422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
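[editor's note] The deprecation warnings printed here and in the next few entries (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point the same way: kubelet v1.31 wants these settings in the config file rather than on the command line. A hedged sketch of the config-file equivalents follows; the containerd socket path is an assumption, while the Flexvolume directory does appear later in this log.

# Sketch of config-file equivalents for the deprecated kubelet flags.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock          # assumed socket path
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/     # path logged by the Flexvolume probe below
# --pod-infra-container-image has no config-file field; per the warning, the sandbox
# image is now reported by the CRI runtime (containerd) itself.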
May 8 00:08:00.328666 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:08:00.328722 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:08:00.328815 kubelet[2422]: I0508 00:08:00.328793 2422 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:08:00.716065 kubelet[2422]: I0508 00:08:00.716035 2422 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:08:00.716065 kubelet[2422]: I0508 00:08:00.716060 2422 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:08:00.716293 kubelet[2422]: I0508 00:08:00.716279 2422 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:08:00.741473 kubelet[2422]: I0508 00:08:00.741452 2422 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:08:00.741645 kubelet[2422]: E0508 00:08:00.741527 2422 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:00.748152 kubelet[2422]: E0508 00:08:00.748056 2422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:08:00.748152 kubelet[2422]: I0508 00:08:00.748074 2422 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:08:00.752230 kubelet[2422]: I0508 00:08:00.752092 2422 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:08:00.753441 kubelet[2422]: I0508 00:08:00.753005 2422 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:08:00.753441 kubelet[2422]: I0508 00:08:00.753107 2422 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:08:00.753441 kubelet[2422]: I0508 00:08:00.753146 2422 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:08:00.753441 kubelet[2422]: I0508 00:08:00.753263 2422 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:08:00.753583 kubelet[2422]: I0508 00:08:00.753269 2422 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:08:00.753583 kubelet[2422]: I0508 00:08:00.753340 2422 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:00.754805 kubelet[2422]: I0508 00:08:00.754790 2422 kubelet.go:408] "Attempting to sync node with API server" May 8 00:08:00.754868 kubelet[2422]: I0508 00:08:00.754860 2422 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:08:00.754929 kubelet[2422]: I0508 00:08:00.754924 2422 kubelet.go:314] "Adding apiserver pod source" May 8 00:08:00.754965 kubelet[2422]: I0508 00:08:00.754960 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:08:00.758182 kubelet[2422]: W0508 00:08:00.758141 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:00.758243 kubelet[2422]: E0508 00:08:00.758190 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:00.760208 kubelet[2422]: W0508 00:08:00.760176 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:00.760628 kubelet[2422]: E0508 00:08:00.760215 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:00.760628 kubelet[2422]: I0508 00:08:00.760277 2422 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:08:00.763523 kubelet[2422]: I0508 00:08:00.763399 2422 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:08:00.765208 kubelet[2422]: W0508 00:08:00.763979 2422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:08:00.765208 kubelet[2422]: I0508 00:08:00.764928 2422 server.go:1269] "Started kubelet" May 8 00:08:00.766396 kubelet[2422]: I0508 00:08:00.766356 2422 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:08:00.768208 kubelet[2422]: I0508 00:08:00.768185 2422 server.go:460] "Adding debug handlers to kubelet server" May 8 00:08:00.771479 kubelet[2422]: I0508 00:08:00.770731 2422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:08:00.771479 kubelet[2422]: I0508 00:08:00.770879 2422 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:08:00.771479 kubelet[2422]: I0508 00:08:00.771044 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:08:00.774866 kubelet[2422]: E0508 00:08:00.772631 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d64a2c65a2599 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:08:00.764896665 +0000 UTC m=+0.500423727,LastTimestamp:2025-05-08 00:08:00.764896665 +0000 UTC m=+0.500423727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:08:00.774866 kubelet[2422]: I0508 00:08:00.774440 2422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:08:00.776724 kubelet[2422]: I0508 00:08:00.776668 2422 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:08:00.777140 kubelet[2422]: 
E0508 00:08:00.776920 2422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:00.779719 kubelet[2422]: E0508 00:08:00.779466 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" May 8 00:08:00.781239 kubelet[2422]: I0508 00:08:00.780556 2422 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:08:00.781239 kubelet[2422]: I0508 00:08:00.780577 2422 reconciler.go:26] "Reconciler: start to sync state" May 8 00:08:00.781484 kubelet[2422]: W0508 00:08:00.781456 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:00.781531 kubelet[2422]: E0508 00:08:00.781489 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:00.781671 kubelet[2422]: I0508 00:08:00.781657 2422 factory.go:221] Registration of the systemd container factory successfully May 8 00:08:00.781816 kubelet[2422]: I0508 00:08:00.781801 2422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:08:00.783211 kubelet[2422]: I0508 00:08:00.783197 2422 factory.go:221] Registration of the containerd container factory successfully May 8 00:08:00.794487 kubelet[2422]: I0508 00:08:00.794448 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:08:00.795380 kubelet[2422]: I0508 00:08:00.795139 2422 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:08:00.795380 kubelet[2422]: I0508 00:08:00.795161 2422 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:08:00.795380 kubelet[2422]: I0508 00:08:00.795173 2422 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:08:00.795380 kubelet[2422]: E0508 00:08:00.795200 2422 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:08:00.798302 kubelet[2422]: W0508 00:08:00.798239 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:00.798302 kubelet[2422]: E0508 00:08:00.798268 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:00.799427 kubelet[2422]: I0508 00:08:00.799140 2422 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:08:00.799427 kubelet[2422]: I0508 00:08:00.799149 2422 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:08:00.799427 kubelet[2422]: I0508 00:08:00.799159 2422 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:00.800446 kubelet[2422]: I0508 00:08:00.800436 2422 policy_none.go:49] "None policy: Start" May 8 00:08:00.800995 kubelet[2422]: I0508 00:08:00.800979 2422 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:08:00.800995 kubelet[2422]: I0508 00:08:00.800995 2422 state_mem.go:35] "Initializing new in-memory state store" May 8 00:08:00.808085 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:08:00.821767 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:08:00.824405 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:08:00.829815 kubelet[2422]: I0508 00:08:00.829609 2422 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:08:00.829815 kubelet[2422]: I0508 00:08:00.829773 2422 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:08:00.829815 kubelet[2422]: I0508 00:08:00.829783 2422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:08:00.830066 kubelet[2422]: I0508 00:08:00.830055 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:08:00.831223 kubelet[2422]: E0508 00:08:00.831206 2422 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:08:00.903588 systemd[1]: Created slice kubepods-burstable-pod6f320c393f2bfac2e26d69d4e2c71bc5.slice - libcontainer container kubepods-burstable-pod6f320c393f2bfac2e26d69d4e2c71bc5.slice. May 8 00:08:00.923532 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 8 00:08:00.930983 kubelet[2422]: I0508 00:08:00.930951 2422 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:00.931384 kubelet[2422]: E0508 00:08:00.931319 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" May 8 00:08:00.932863 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 8 00:08:00.980645 kubelet[2422]: E0508 00:08:00.980550 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" May 8 00:08:00.982072 kubelet[2422]: I0508 00:08:00.982047 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f320c393f2bfac2e26d69d4e2c71bc5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f320c393f2bfac2e26d69d4e2c71bc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:00.982164 kubelet[2422]: I0508 00:08:00.982083 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f320c393f2bfac2e26d69d4e2c71bc5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f320c393f2bfac2e26d69d4e2c71bc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:00.982164 kubelet[2422]: I0508 00:08:00.982096 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f320c393f2bfac2e26d69d4e2c71bc5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f320c393f2bfac2e26d69d4e2c71bc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:00.982164 kubelet[2422]: I0508 00:08:00.982106 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:00.982252 kubelet[2422]: I0508 00:08:00.982202 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:00.982252 kubelet[2422]: I0508 00:08:00.982214 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:00.982252 kubelet[2422]: I0508 00:08:00.982231 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:00.982252 kubelet[2422]: I0508 00:08:00.982240 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:00.982357 kubelet[2422]: I0508 00:08:00.982253 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:08:01.132170 kubelet[2422]: I0508 00:08:01.132150 2422 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:01.132423 kubelet[2422]: E0508 00:08:01.132382 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" May 8 00:08:01.222966 containerd[1561]: time="2025-05-08T00:08:01.222922256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f320c393f2bfac2e26d69d4e2c71bc5,Namespace:kube-system,Attempt:0,}" May 8 00:08:01.225772 containerd[1561]: time="2025-05-08T00:08:01.225663436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 00:08:01.236564 containerd[1561]: time="2025-05-08T00:08:01.236313143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 00:08:01.381464 kubelet[2422]: E0508 00:08:01.381430 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" May 8 00:08:01.534402 kubelet[2422]: I0508 00:08:01.534145 2422 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:01.534402 kubelet[2422]: E0508 00:08:01.534332 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" May 8 00:08:01.667627 kubelet[2422]: W0508 00:08:01.667538 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:01.667627 kubelet[2422]: E0508 00:08:01.667601 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" 
logger="UnhandledError" May 8 00:08:01.749054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882388910.mount: Deactivated successfully. May 8 00:08:01.750820 containerd[1561]: time="2025-05-08T00:08:01.750795389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:01.751370 containerd[1561]: time="2025-05-08T00:08:01.751339696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:08:01.751983 containerd[1561]: time="2025-05-08T00:08:01.751966302Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:01.753611 containerd[1561]: time="2025-05-08T00:08:01.753592224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:01.754584 containerd[1561]: time="2025-05-08T00:08:01.754563617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:08:01.755731 containerd[1561]: time="2025-05-08T00:08:01.755711129Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:01.755947 containerd[1561]: time="2025-05-08T00:08:01.755927622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:08:01.760882 containerd[1561]: time="2025-05-08T00:08:01.760860060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:08:01.761580 containerd[1561]: time="2025-05-08T00:08:01.761393122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.371493ms" May 8 00:08:01.763049 containerd[1561]: time="2025-05-08T00:08:01.762804003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.435154ms" May 8 00:08:01.763479 containerd[1561]: time="2025-05-08T00:08:01.763462043Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.748376ms" May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.904224252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.904268655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.904277875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.904848987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.904984134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.904996273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:01.905509 containerd[1561]: time="2025-05-08T00:08:01.905074526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:01.906394 containerd[1561]: time="2025-05-08T00:08:01.905801769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:01.907458 containerd[1561]: time="2025-05-08T00:08:01.902465222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:01.907458 containerd[1561]: time="2025-05-08T00:08:01.906878578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:01.907458 containerd[1561]: time="2025-05-08T00:08:01.906901470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:01.908002 containerd[1561]: time="2025-05-08T00:08:01.907962115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:01.928861 systemd[1]: Started cri-containerd-5f57d863ed25a8c0726a5e2abd78a4d17ba9e865f1e48c4267d36a653669a9a4.scope - libcontainer container 5f57d863ed25a8c0726a5e2abd78a4d17ba9e865f1e48c4267d36a653669a9a4. May 8 00:08:01.935222 systemd[1]: Started cri-containerd-60b6d96793af660598d83b8d6062a4b9ad5494ab55f3e269783b219a7d10ed58.scope - libcontainer container 60b6d96793af660598d83b8d6062a4b9ad5494ab55f3e269783b219a7d10ed58. May 8 00:08:01.938430 systemd[1]: Started cri-containerd-f22e28aa12bc470f2780b6f0e036829aa4e3f35b65fff0f946ecfb47fd1f0f19.scope - libcontainer container f22e28aa12bc470f2780b6f0e036829aa4e3f35b65fff0f946ecfb47fd1f0f19. 
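[editor's note] The three RunPodSandbox requests above come from kubelet reading static pod manifests out of /etc/kubernetes/manifests; the pod UIDs 6f320c…, d4a6b755… and 0613557c… correspond to those files. As an illustrative sketch only, since the actual manifests are not in this log, the kube-apiserver manifest has roughly this shape, with hostPath volumes matching the ca-certs and k8s-certs volume names seen in the reconciler entries earlier; the host paths and flags are assumptions, and the priority class matches the "no PriorityClass with name system-node-critical" mirror-pod error logged later.

# Sketch of a kubeadm-style static pod manifest; heavily abbreviated.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-node-critical
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.31.8   # tag matches the pull earlier in this log
    command:
    - kube-apiserver
    - --advertise-address=139.178.70.109            # assumption, taken from the API endpoint seen in this log
    volumeMounts:
    - name: k8s-certs
      mountPath: /etc/kubernetes/pki
      readOnly: true
    - name: ca-certs
      mountPath: /etc/ssl/certs
      readOnly: true
  volumes:
  - name: k8s-certs
    hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
  - name: ca-certs
    hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate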
May 8 00:08:01.984218 containerd[1561]: time="2025-05-08T00:08:01.984107113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f320c393f2bfac2e26d69d4e2c71bc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f22e28aa12bc470f2780b6f0e036829aa4e3f35b65fff0f946ecfb47fd1f0f19\"" May 8 00:08:01.987091 containerd[1561]: time="2025-05-08T00:08:01.986979640Z" level=info msg="CreateContainer within sandbox \"f22e28aa12bc470f2780b6f0e036829aa4e3f35b65fff0f946ecfb47fd1f0f19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:08:01.995117 containerd[1561]: time="2025-05-08T00:08:01.995075080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"60b6d96793af660598d83b8d6062a4b9ad5494ab55f3e269783b219a7d10ed58\"" May 8 00:08:01.995362 containerd[1561]: time="2025-05-08T00:08:01.995307714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f57d863ed25a8c0726a5e2abd78a4d17ba9e865f1e48c4267d36a653669a9a4\"" May 8 00:08:01.998281 containerd[1561]: time="2025-05-08T00:08:01.998097313Z" level=info msg="CreateContainer within sandbox \"60b6d96793af660598d83b8d6062a4b9ad5494ab55f3e269783b219a7d10ed58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:08:01.998281 containerd[1561]: time="2025-05-08T00:08:01.998212600Z" level=info msg="CreateContainer within sandbox \"5f57d863ed25a8c0726a5e2abd78a4d17ba9e865f1e48c4267d36a653669a9a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:08:02.015637 containerd[1561]: time="2025-05-08T00:08:02.015562269Z" level=info msg="CreateContainer within sandbox \"f22e28aa12bc470f2780b6f0e036829aa4e3f35b65fff0f946ecfb47fd1f0f19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17d0fa0366da540857858b6b308117a95d1b739ca12c37461db31f1d44b31547\"" May 8 00:08:02.016680 containerd[1561]: time="2025-05-08T00:08:02.016505863Z" level=info msg="CreateContainer within sandbox \"5f57d863ed25a8c0726a5e2abd78a4d17ba9e865f1e48c4267d36a653669a9a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"552a858c3d8e1806ba8cde2a582540ada7be94d61a4a1b330a6096b5085c9b31\"" May 8 00:08:02.017225 containerd[1561]: time="2025-05-08T00:08:02.017207338Z" level=info msg="StartContainer for \"17d0fa0366da540857858b6b308117a95d1b739ca12c37461db31f1d44b31547\"" May 8 00:08:02.017605 containerd[1561]: time="2025-05-08T00:08:02.017208634Z" level=info msg="StartContainer for \"552a858c3d8e1806ba8cde2a582540ada7be94d61a4a1b330a6096b5085c9b31\"" May 8 00:08:02.019831 containerd[1561]: time="2025-05-08T00:08:02.019807028Z" level=info msg="CreateContainer within sandbox \"60b6d96793af660598d83b8d6062a4b9ad5494ab55f3e269783b219a7d10ed58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cee7c1957a9fad1567a428f40574b5ce6957e6f7409cca70da8a166fb3a3164\"" May 8 00:08:02.020468 containerd[1561]: time="2025-05-08T00:08:02.020449016Z" level=info msg="StartContainer for \"3cee7c1957a9fad1567a428f40574b5ce6957e6f7409cca70da8a166fb3a3164\"" May 8 00:08:02.046831 systemd[1]: Started cri-containerd-552a858c3d8e1806ba8cde2a582540ada7be94d61a4a1b330a6096b5085c9b31.scope - libcontainer container 
552a858c3d8e1806ba8cde2a582540ada7be94d61a4a1b330a6096b5085c9b31. May 8 00:08:02.059770 systemd[1]: Started cri-containerd-17d0fa0366da540857858b6b308117a95d1b739ca12c37461db31f1d44b31547.scope - libcontainer container 17d0fa0366da540857858b6b308117a95d1b739ca12c37461db31f1d44b31547. May 8 00:08:02.060740 systemd[1]: Started cri-containerd-3cee7c1957a9fad1567a428f40574b5ce6957e6f7409cca70da8a166fb3a3164.scope - libcontainer container 3cee7c1957a9fad1567a428f40574b5ce6957e6f7409cca70da8a166fb3a3164. May 8 00:08:02.100366 containerd[1561]: time="2025-05-08T00:08:02.100338897Z" level=info msg="StartContainer for \"3cee7c1957a9fad1567a428f40574b5ce6957e6f7409cca70da8a166fb3a3164\" returns successfully" May 8 00:08:02.116376 containerd[1561]: time="2025-05-08T00:08:02.116351493Z" level=info msg="StartContainer for \"17d0fa0366da540857858b6b308117a95d1b739ca12c37461db31f1d44b31547\" returns successfully" May 8 00:08:02.122532 containerd[1561]: time="2025-05-08T00:08:02.122505956Z" level=info msg="StartContainer for \"552a858c3d8e1806ba8cde2a582540ada7be94d61a4a1b330a6096b5085c9b31\" returns successfully" May 8 00:08:02.170280 kubelet[2422]: W0508 00:08:02.170198 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:02.170419 kubelet[2422]: E0508 00:08:02.170403 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:02.183113 kubelet[2422]: E0508 00:08:02.183081 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" May 8 00:08:02.284269 kubelet[2422]: W0508 00:08:02.284183 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:02.284269 kubelet[2422]: E0508 00:08:02.284243 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:02.339147 kubelet[2422]: I0508 00:08:02.339125 2422 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:02.339362 kubelet[2422]: E0508 00:08:02.339334 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" May 8 00:08:02.350774 kubelet[2422]: W0508 00:08:02.350734 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused May 8 00:08:02.350863 kubelet[2422]: E0508 00:08:02.350780 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:08:03.805876 kubelet[2422]: E0508 00:08:03.805752 2422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:08:03.940403 kubelet[2422]: I0508 00:08:03.940351 2422 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:03.948858 kubelet[2422]: I0508 00:08:03.948804 2422 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:08:03.948858 kubelet[2422]: E0508 00:08:03.948833 2422 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:08:03.954318 kubelet[2422]: E0508 00:08:03.954289 2422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:04.054667 kubelet[2422]: E0508 00:08:04.054642 2422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:04.155264 kubelet[2422]: E0508 00:08:04.155069 2422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:04.255766 kubelet[2422]: E0508 00:08:04.255736 2422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:08:04.474736 kubelet[2422]: E0508 00:08:04.474675 2422 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:08:04.762155 kubelet[2422]: I0508 00:08:04.762065 2422 apiserver.go:52] "Watching apiserver" May 8 00:08:04.781159 kubelet[2422]: I0508 00:08:04.781129 2422 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:08:05.819176 systemd[1]: Reload requested from client PID 2700 ('systemctl') (unit session-9.scope)... May 8 00:08:05.819186 systemd[1]: Reloading... May 8 00:08:05.885753 zram_generator::config[2746]: No configuration found. May 8 00:08:05.950399 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:08:05.968436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:08:06.049196 systemd[1]: Reloading finished in 229 ms. May 8 00:08:06.069490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:06.081142 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:08:06.081430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:08:06.081530 systemd[1]: kubelet.service: Consumed 640ms CPU time, 114.1M memory peak. May 8 00:08:06.084950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:08:06.399740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:08:06.404669 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:08:06.467939 kubelet[2812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:08:06.467939 kubelet[2812]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:08:06.467939 kubelet[2812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:08:06.468248 kubelet[2812]: I0508 00:08:06.468007 2812 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:08:06.473436 kubelet[2812]: I0508 00:08:06.473417 2812 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:08:06.473708 kubelet[2812]: I0508 00:08:06.473528 2812 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:08:06.473763 kubelet[2812]: I0508 00:08:06.473757 2812 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:08:06.488097 kubelet[2812]: I0508 00:08:06.488079 2812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:08:06.502664 kubelet[2812]: I0508 00:08:06.502645 2812 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:08:06.600741 kubelet[2812]: E0508 00:08:06.600705 2812 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:08:06.601388 kubelet[2812]: I0508 00:08:06.600867 2812 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:08:06.603037 kubelet[2812]: I0508 00:08:06.603020 2812 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:08:06.626652 kubelet[2812]: I0508 00:08:06.626623 2812 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:08:06.626849 kubelet[2812]: I0508 00:08:06.626797 2812 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:08:06.627091 kubelet[2812]: I0508 00:08:06.626822 2812 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:08:06.627180 kubelet[2812]: I0508 00:08:06.627097 2812 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:08:06.627180 kubelet[2812]: I0508 00:08:06.627106 2812 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:08:06.627180 kubelet[2812]: I0508 00:08:06.627155 2812 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:06.662302 kubelet[2812]: I0508 00:08:06.661657 2812 kubelet.go:408] "Attempting to sync node with API server" May 8 00:08:06.662302 kubelet[2812]: I0508 00:08:06.661684 2812 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:08:06.662302 kubelet[2812]: I0508 00:08:06.661720 2812 kubelet.go:314] "Adding apiserver pod source" May 8 00:08:06.662302 kubelet[2812]: I0508 00:08:06.661733 2812 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:08:06.662530 kubelet[2812]: I0508 00:08:06.662445 2812 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:08:06.663016 kubelet[2812]: I0508 00:08:06.662892 2812 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:08:06.663387 kubelet[2812]: I0508 00:08:06.663372 2812 server.go:1269] "Started kubelet" May 8 00:08:06.723667 kubelet[2812]: I0508 00:08:06.723616 2812 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:08:06.732497 kubelet[2812]: I0508 00:08:06.732457 
2812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:08:06.732669 kubelet[2812]: I0508 00:08:06.732657 2812 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:08:06.733329 kubelet[2812]: I0508 00:08:06.733317 2812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:08:06.740032 kubelet[2812]: I0508 00:08:06.739279 2812 server.go:460] "Adding debug handlers to kubelet server" May 8 00:08:06.740032 kubelet[2812]: I0508 00:08:06.739902 2812 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:08:06.742171 kubelet[2812]: I0508 00:08:06.742083 2812 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:08:06.742171 kubelet[2812]: I0508 00:08:06.742140 2812 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:08:06.742223 kubelet[2812]: I0508 00:08:06.742214 2812 reconciler.go:26] "Reconciler: start to sync state" May 8 00:08:06.743113 kubelet[2812]: I0508 00:08:06.743098 2812 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:08:06.743999 kubelet[2812]: I0508 00:08:06.743985 2812 factory.go:221] Registration of the containerd container factory successfully May 8 00:08:06.743999 kubelet[2812]: I0508 00:08:06.743996 2812 factory.go:221] Registration of the systemd container factory successfully May 8 00:08:06.747292 kubelet[2812]: E0508 00:08:06.747280 2812 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:08:06.758914 kubelet[2812]: I0508 00:08:06.758813 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:08:06.761220 kubelet[2812]: I0508 00:08:06.760929 2812 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:08:06.761220 kubelet[2812]: I0508 00:08:06.761050 2812 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:08:06.761220 kubelet[2812]: I0508 00:08:06.761068 2812 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:08:06.761220 kubelet[2812]: E0508 00:08:06.761094 2812 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:08:06.778857 kubelet[2812]: I0508 00:08:06.778835 2812 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:08:06.778857 kubelet[2812]: I0508 00:08:06.778852 2812 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:08:06.778979 kubelet[2812]: I0508 00:08:06.778868 2812 state_mem.go:36] "Initialized new in-memory state store" May 8 00:08:06.778979 kubelet[2812]: I0508 00:08:06.778976 2812 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:08:06.779063 kubelet[2812]: I0508 00:08:06.778983 2812 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:08:06.779063 kubelet[2812]: I0508 00:08:06.778999 2812 policy_none.go:49] "None policy: Start" May 8 00:08:06.779717 kubelet[2812]: I0508 00:08:06.779359 2812 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:08:06.779717 kubelet[2812]: I0508 00:08:06.779408 2812 state_mem.go:35] "Initializing new in-memory state store" May 8 00:08:06.779717 kubelet[2812]: I0508 00:08:06.779506 2812 state_mem.go:75] "Updated machine memory state" May 8 00:08:06.782688 kubelet[2812]: I0508 00:08:06.782670 2812 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:08:06.783159 kubelet[2812]: I0508 00:08:06.783148 2812 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:08:06.783208 kubelet[2812]: I0508 00:08:06.783163 2812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:08:06.783801 kubelet[2812]: I0508 00:08:06.783393 2812 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:08:06.829259 sudo[2846]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:08:06.829425 sudo[2846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:08:06.877682 kubelet[2812]: E0508 00:08:06.877618 2812 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:08:06.890073 kubelet[2812]: I0508 00:08:06.890055 2812 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:08:06.899362 kubelet[2812]: I0508 00:08:06.898962 2812 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 8 00:08:06.899362 kubelet[2812]: I0508 00:08:06.899006 2812 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:08:06.943468 kubelet[2812]: I0508 00:08:06.943398 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f320c393f2bfac2e26d69d4e2c71bc5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f320c393f2bfac2e26d69d4e2c71bc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:06.943468 kubelet[2812]: I0508 00:08:06.943449 2812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:06.944027 kubelet[2812]: I0508 00:08:06.944002 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:08:06.944067 kubelet[2812]: I0508 00:08:06.944031 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f320c393f2bfac2e26d69d4e2c71bc5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f320c393f2bfac2e26d69d4e2c71bc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:06.944087 kubelet[2812]: I0508 00:08:06.944080 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f320c393f2bfac2e26d69d4e2c71bc5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f320c393f2bfac2e26d69d4e2c71bc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:08:06.944105 kubelet[2812]: I0508 00:08:06.944092 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:06.944105 kubelet[2812]: I0508 00:08:06.944102 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:06.944136 kubelet[2812]: I0508 00:08:06.944110 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:06.944156 kubelet[2812]: I0508 00:08:06.944137 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:08:07.277626 sudo[2846]: pam_unix(sudo:session): session closed for user root May 8 00:08:07.662886 kubelet[2812]: I0508 00:08:07.662715 2812 apiserver.go:52] "Watching apiserver" May 8 00:08:07.742807 kubelet[2812]: I0508 00:08:07.742764 2812 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:08:07.777749 kubelet[2812]: E0508 00:08:07.777527 2812 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:08:07.821522 kubelet[2812]: I0508 00:08:07.821480 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.821453827 podStartE2EDuration="3.821453827s" podCreationTimestamp="2025-05-08 00:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:07.811826353 +0000 UTC m=+1.395223751" watchObservedRunningTime="2025-05-08 00:08:07.821453827 +0000 UTC m=+1.404851218" May 8 00:08:07.827006 kubelet[2812]: I0508 00:08:07.826832 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.826818534 podStartE2EDuration="1.826818534s" podCreationTimestamp="2025-05-08 00:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:07.822032512 +0000 UTC m=+1.405429911" watchObservedRunningTime="2025-05-08 00:08:07.826818534 +0000 UTC m=+1.410215929" May 8 00:08:07.844876 kubelet[2812]: I0508 00:08:07.844743 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.844731907 podStartE2EDuration="1.844731907s" podCreationTimestamp="2025-05-08 00:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:07.827235046 +0000 UTC m=+1.410632443" watchObservedRunningTime="2025-05-08 00:08:07.844731907 +0000 UTC m=+1.428129305" May 8 00:08:09.026376 sudo[1858]: pam_unix(sudo:session): session closed for user root May 8 00:08:09.027111 sshd[1857]: Connection closed by 139.178.89.65 port 60792 May 8 00:08:09.031655 sshd-session[1853]: pam_unix(sshd:session): session closed for user core May 8 00:08:09.033985 systemd[1]: sshd@6-139.178.70.109:22-139.178.89.65:60792.service: Deactivated successfully. May 8 00:08:09.035361 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:08:09.035497 systemd[1]: session-9.scope: Consumed 3.395s CPU time, 210M memory peak. May 8 00:08:09.036367 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. May 8 00:08:09.037117 systemd-logind[1535]: Removed session 9. May 8 00:08:10.383423 kubelet[2812]: I0508 00:08:10.383312 2812 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:08:10.383925 containerd[1561]: time="2025-05-08T00:08:10.383861436Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:08:10.384134 kubelet[2812]: I0508 00:08:10.383992 2812 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:08:11.226170 systemd[1]: Created slice kubepods-burstable-pod34c52980_bad2_443f_9e83_097e93133d79.slice - libcontainer container kubepods-burstable-pod34c52980_bad2_443f_9e83_097e93133d79.slice. May 8 00:08:11.243459 systemd[1]: Created slice kubepods-besteffort-pod9af01b41_7028_41b8_bfc5_42369ff7925d.slice - libcontainer container kubepods-besteffort-pod9af01b41_7028_41b8_bfc5_42369ff7925d.slice. 
May 8 00:08:11.271790 kubelet[2812]: I0508 00:08:11.271466 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-run\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271790 kubelet[2812]: I0508 00:08:11.271497 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-lib-modules\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271790 kubelet[2812]: I0508 00:08:11.271514 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-xtables-lock\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271790 kubelet[2812]: I0508 00:08:11.271523 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2bgc\" (UniqueName: \"kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-kube-api-access-w2bgc\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271790 kubelet[2812]: I0508 00:08:11.271539 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9af01b41-7028-41b8-bfc5-42369ff7925d-lib-modules\") pod \"kube-proxy-c8cfm\" (UID: \"9af01b41-7028-41b8-bfc5-42369ff7925d\") " pod="kube-system/kube-proxy-c8cfm" May 8 00:08:11.271790 kubelet[2812]: I0508 00:08:11.271551 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-hostproc\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271979 kubelet[2812]: I0508 00:08:11.271559 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cni-path\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271979 kubelet[2812]: I0508 00:08:11.271568 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-hubble-tls\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271979 kubelet[2812]: I0508 00:08:11.271577 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-bpf-maps\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271979 kubelet[2812]: I0508 00:08:11.271585 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/34c52980-bad2-443f-9e83-097e93133d79-cilium-config-path\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.271979 kubelet[2812]: I0508 00:08:11.271599 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9af01b41-7028-41b8-bfc5-42369ff7925d-xtables-lock\") pod \"kube-proxy-c8cfm\" (UID: \"9af01b41-7028-41b8-bfc5-42369ff7925d\") " pod="kube-system/kube-proxy-c8cfm" May 8 00:08:11.271979 kubelet[2812]: I0508 00:08:11.271615 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c52980-bad2-443f-9e83-097e93133d79-clustermesh-secrets\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.272091 kubelet[2812]: I0508 00:08:11.271627 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-kernel\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.272091 kubelet[2812]: I0508 00:08:11.271637 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-cgroup\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.272091 kubelet[2812]: I0508 00:08:11.271648 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-etc-cni-netd\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.272091 kubelet[2812]: I0508 00:08:11.271660 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9m9c\" (UniqueName: \"kubernetes.io/projected/9af01b41-7028-41b8-bfc5-42369ff7925d-kube-api-access-d9m9c\") pod \"kube-proxy-c8cfm\" (UID: \"9af01b41-7028-41b8-bfc5-42369ff7925d\") " pod="kube-system/kube-proxy-c8cfm" May 8 00:08:11.272091 kubelet[2812]: I0508 00:08:11.271670 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-net\") pod \"cilium-qgrpq\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " pod="kube-system/cilium-qgrpq" May 8 00:08:11.272202 kubelet[2812]: I0508 00:08:11.271680 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9af01b41-7028-41b8-bfc5-42369ff7925d-kube-proxy\") pod \"kube-proxy-c8cfm\" (UID: \"9af01b41-7028-41b8-bfc5-42369ff7925d\") " pod="kube-system/kube-proxy-c8cfm" May 8 00:08:11.463928 systemd[1]: Created slice kubepods-besteffort-pod6a9134e1_ed8b_44b1_a409_7575c18174b9.slice - libcontainer container kubepods-besteffort-pod6a9134e1_ed8b_44b1_a409_7575c18174b9.slice. 
May 8 00:08:11.473822 kubelet[2812]: I0508 00:08:11.473794 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsk7l\" (UniqueName: \"kubernetes.io/projected/6a9134e1-ed8b-44b1-a409-7575c18174b9-kube-api-access-vsk7l\") pod \"cilium-operator-5d85765b45-s57b8\" (UID: \"6a9134e1-ed8b-44b1-a409-7575c18174b9\") " pod="kube-system/cilium-operator-5d85765b45-s57b8" May 8 00:08:11.473822 kubelet[2812]: I0508 00:08:11.473825 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a9134e1-ed8b-44b1-a409-7575c18174b9-cilium-config-path\") pod \"cilium-operator-5d85765b45-s57b8\" (UID: \"6a9134e1-ed8b-44b1-a409-7575c18174b9\") " pod="kube-system/cilium-operator-5d85765b45-s57b8" May 8 00:08:11.541195 containerd[1561]: time="2025-05-08T00:08:11.541085623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgrpq,Uid:34c52980-bad2-443f-9e83-097e93133d79,Namespace:kube-system,Attempt:0,}" May 8 00:08:11.561857 containerd[1561]: time="2025-05-08T00:08:11.561642353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8cfm,Uid:9af01b41-7028-41b8-bfc5-42369ff7925d,Namespace:kube-system,Attempt:0,}" May 8 00:08:11.660257 containerd[1561]: time="2025-05-08T00:08:11.660117610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:11.660257 containerd[1561]: time="2025-05-08T00:08:11.660193972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:11.660257 containerd[1561]: time="2025-05-08T00:08:11.660211507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:11.660645 containerd[1561]: time="2025-05-08T00:08:11.660516339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:11.664257 containerd[1561]: time="2025-05-08T00:08:11.664039428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:11.664352 containerd[1561]: time="2025-05-08T00:08:11.664267677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:11.664352 containerd[1561]: time="2025-05-08T00:08:11.664313736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:11.664612 containerd[1561]: time="2025-05-08T00:08:11.664509295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:11.680878 systemd[1]: Started cri-containerd-5d837844e3a7213543395e035d8e130be5c7a01aeb2922fa819d05fa4c27c984.scope - libcontainer container 5d837844e3a7213543395e035d8e130be5c7a01aeb2922fa819d05fa4c27c984. May 8 00:08:11.685016 systemd[1]: Started cri-containerd-e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f.scope - libcontainer container e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f. 
May 8 00:08:11.705935 containerd[1561]: time="2025-05-08T00:08:11.705849627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8cfm,Uid:9af01b41-7028-41b8-bfc5-42369ff7925d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d837844e3a7213543395e035d8e130be5c7a01aeb2922fa819d05fa4c27c984\"" May 8 00:08:11.709632 containerd[1561]: time="2025-05-08T00:08:11.709528545Z" level=info msg="CreateContainer within sandbox \"5d837844e3a7213543395e035d8e130be5c7a01aeb2922fa819d05fa4c27c984\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:08:11.713881 containerd[1561]: time="2025-05-08T00:08:11.713762063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgrpq,Uid:34c52980-bad2-443f-9e83-097e93133d79,Namespace:kube-system,Attempt:0,} returns sandbox id \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\"" May 8 00:08:11.715472 containerd[1561]: time="2025-05-08T00:08:11.715144590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:08:11.740455 containerd[1561]: time="2025-05-08T00:08:11.740421130Z" level=info msg="CreateContainer within sandbox \"5d837844e3a7213543395e035d8e130be5c7a01aeb2922fa819d05fa4c27c984\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"13669bfe0ef11ea18372eec422332363841d8d8f79f27efd9694b850dd08efe7\"" May 8 00:08:11.741342 containerd[1561]: time="2025-05-08T00:08:11.741302858Z" level=info msg="StartContainer for \"13669bfe0ef11ea18372eec422332363841d8d8f79f27efd9694b850dd08efe7\"" May 8 00:08:11.762846 systemd[1]: Started cri-containerd-13669bfe0ef11ea18372eec422332363841d8d8f79f27efd9694b850dd08efe7.scope - libcontainer container 13669bfe0ef11ea18372eec422332363841d8d8f79f27efd9694b850dd08efe7. May 8 00:08:11.768517 containerd[1561]: time="2025-05-08T00:08:11.768482257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s57b8,Uid:6a9134e1-ed8b-44b1-a409-7575c18174b9,Namespace:kube-system,Attempt:0,}" May 8 00:08:11.791156 containerd[1561]: time="2025-05-08T00:08:11.791127322Z" level=info msg="StartContainer for \"13669bfe0ef11ea18372eec422332363841d8d8f79f27efd9694b850dd08efe7\" returns successfully" May 8 00:08:11.857545 containerd[1561]: time="2025-05-08T00:08:11.856749235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:11.857545 containerd[1561]: time="2025-05-08T00:08:11.857097296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:11.857658 containerd[1561]: time="2025-05-08T00:08:11.857107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:11.857658 containerd[1561]: time="2025-05-08T00:08:11.857169705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:11.870872 systemd[1]: Started cri-containerd-22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266.scope - libcontainer container 22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266. 
May 8 00:08:11.901024 containerd[1561]: time="2025-05-08T00:08:11.900985722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s57b8,Uid:6a9134e1-ed8b-44b1-a409-7575c18174b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266\"" May 8 00:08:12.967886 kubelet[2812]: I0508 00:08:12.967594 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8cfm" podStartSLOduration=1.967580015 podStartE2EDuration="1.967580015s" podCreationTimestamp="2025-05-08 00:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:12.796731711 +0000 UTC m=+6.380129109" watchObservedRunningTime="2025-05-08 00:08:12.967580015 +0000 UTC m=+6.550977412" May 8 00:08:16.238867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866333397.mount: Deactivated successfully. May 8 00:08:18.170600 containerd[1561]: time="2025-05-08T00:08:18.170512627Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:08:18.172328 containerd[1561]: time="2025-05-08T00:08:18.172241965Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.45668735s" May 8 00:08:18.172328 containerd[1561]: time="2025-05-08T00:08:18.172263018Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:08:18.174289 containerd[1561]: time="2025-05-08T00:08:18.173718917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:08:18.191754 containerd[1561]: time="2025-05-08T00:08:18.191685792Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:18.193103 containerd[1561]: time="2025-05-08T00:08:18.192875222Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:18.193999 containerd[1561]: time="2025-05-08T00:08:18.193983194Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:08:18.391699 containerd[1561]: time="2025-05-08T00:08:18.391659451Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\"" May 8 00:08:18.392745 containerd[1561]: time="2025-05-08T00:08:18.392708632Z" level=info msg="StartContainer for 
\"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\"" May 8 00:08:18.537810 systemd[1]: Started cri-containerd-94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a.scope - libcontainer container 94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a. May 8 00:08:18.607593 containerd[1561]: time="2025-05-08T00:08:18.607561659Z" level=info msg="StartContainer for \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\" returns successfully" May 8 00:08:18.620574 systemd[1]: cri-containerd-94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a.scope: Deactivated successfully. May 8 00:08:18.703792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a-rootfs.mount: Deactivated successfully. May 8 00:08:18.941952 containerd[1561]: time="2025-05-08T00:08:18.937435565Z" level=info msg="shim disconnected" id=94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a namespace=k8s.io May 8 00:08:18.941952 containerd[1561]: time="2025-05-08T00:08:18.941803806Z" level=warning msg="cleaning up after shim disconnected" id=94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a namespace=k8s.io May 8 00:08:18.941952 containerd[1561]: time="2025-05-08T00:08:18.941811918Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:19.798867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386041096.mount: Deactivated successfully. May 8 00:08:19.835738 containerd[1561]: time="2025-05-08T00:08:19.833332312Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:08:19.850423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285874014.mount: Deactivated successfully. May 8 00:08:19.854984 containerd[1561]: time="2025-05-08T00:08:19.854909342Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\"" May 8 00:08:19.855411 containerd[1561]: time="2025-05-08T00:08:19.855391389Z" level=info msg="StartContainer for \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\"" May 8 00:08:19.876810 systemd[1]: Started cri-containerd-d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983.scope - libcontainer container d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983. May 8 00:08:19.899859 containerd[1561]: time="2025-05-08T00:08:19.899827800Z" level=info msg="StartContainer for \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\" returns successfully" May 8 00:08:19.911954 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:08:19.912406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:08:19.912642 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:08:19.919000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:08:19.919155 systemd[1]: cri-containerd-d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983.scope: Deactivated successfully. May 8 00:08:19.992603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:08:20.069906 containerd[1561]: time="2025-05-08T00:08:20.069545798Z" level=info msg="shim disconnected" id=d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983 namespace=k8s.io May 8 00:08:20.069906 containerd[1561]: time="2025-05-08T00:08:20.069579118Z" level=warning msg="cleaning up after shim disconnected" id=d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983 namespace=k8s.io May 8 00:08:20.069906 containerd[1561]: time="2025-05-08T00:08:20.069584470Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:20.200552 containerd[1561]: time="2025-05-08T00:08:20.200501430Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:20.201031 containerd[1561]: time="2025-05-08T00:08:20.200998256Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:08:20.201371 containerd[1561]: time="2025-05-08T00:08:20.201115409Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:08:20.203503 containerd[1561]: time="2025-05-08T00:08:20.203481506Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.029742475s" May 8 00:08:20.203503 containerd[1561]: time="2025-05-08T00:08:20.203501392Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:08:20.205927 containerd[1561]: time="2025-05-08T00:08:20.205541384Z" level=info msg="CreateContainer within sandbox \"22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:08:20.216987 containerd[1561]: time="2025-05-08T00:08:20.216948333Z" level=info msg="CreateContainer within sandbox \"22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\"" May 8 00:08:20.217517 containerd[1561]: time="2025-05-08T00:08:20.217389503Z" level=info msg="StartContainer for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\"" May 8 00:08:20.235822 systemd[1]: Started cri-containerd-3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847.scope - libcontainer container 3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847. 
May 8 00:08:20.253440 containerd[1561]: time="2025-05-08T00:08:20.253415318Z" level=info msg="StartContainer for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" returns successfully" May 8 00:08:20.865713 containerd[1561]: time="2025-05-08T00:08:20.865674660Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:08:20.882716 containerd[1561]: time="2025-05-08T00:08:20.882670999Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\"" May 8 00:08:20.883046 containerd[1561]: time="2025-05-08T00:08:20.883031815Z" level=info msg="StartContainer for \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\"" May 8 00:08:20.887301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058351370.mount: Deactivated successfully. May 8 00:08:20.913787 kubelet[2812]: I0508 00:08:20.910940 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-s57b8" podStartSLOduration=1.607923983 podStartE2EDuration="9.910361123s" podCreationTimestamp="2025-05-08 00:08:11 +0000 UTC" firstStartedPulling="2025-05-08 00:08:11.901867182 +0000 UTC m=+5.485264571" lastFinishedPulling="2025-05-08 00:08:20.204304319 +0000 UTC m=+13.787701711" observedRunningTime="2025-05-08 00:08:20.909581368 +0000 UTC m=+14.492978770" watchObservedRunningTime="2025-05-08 00:08:20.910361123 +0000 UTC m=+14.493758523" May 8 00:08:20.924848 systemd[1]: Started cri-containerd-2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce.scope - libcontainer container 2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce. May 8 00:08:20.955948 containerd[1561]: time="2025-05-08T00:08:20.955915665Z" level=info msg="StartContainer for \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\" returns successfully" May 8 00:08:21.035019 systemd[1]: cri-containerd-2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce.scope: Deactivated successfully. May 8 00:08:21.035188 systemd[1]: cri-containerd-2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce.scope: Consumed 13ms CPU time, 5.1M memory peak, 1M read from disk. May 8 00:08:21.204103 containerd[1561]: time="2025-05-08T00:08:21.204027760Z" level=info msg="shim disconnected" id=2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce namespace=k8s.io May 8 00:08:21.204103 containerd[1561]: time="2025-05-08T00:08:21.204060497Z" level=warning msg="cleaning up after shim disconnected" id=2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce namespace=k8s.io May 8 00:08:21.204103 containerd[1561]: time="2025-05-08T00:08:21.204070911Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:21.793962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce-rootfs.mount: Deactivated successfully. 
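[Aside, not part of the journal: the pod_startup_latency_tracker entry just above reports both podStartE2EDuration and podStartSLOduration for cilium-operator. The logged numbers are consistent with the SLO duration being the end-to-end time (watchObservedRunningTime minus podCreationTimestamp) with the image-pull window (lastFinishedPulling minus firstStartedPulling) excluded. The sketch below redoes that arithmetic with the timestamps copied from the entry, truncated to microseconds, so the last digits may differ slightly from the kubelet's nanosecond values; it is a reconstruction of the reported numbers, not the kubelet's own code.]

    from datetime import datetime

    def parse(ts: str) -> datetime:
        # "2025-05-08 00:08:20.204304319 +0000 UTC" -> aware datetime, truncated to microseconds
        date, clock, offset = ts.split()[:3]
        secs, _, frac = clock.partition(".")
        return datetime.strptime(f"{date} {secs}.{(frac or '0')[:6]} {offset}",
                                 "%Y-%m-%d %H:%M:%S.%f %z")

    created        = parse("2025-05-08 00:08:11 +0000 UTC")
    first_pull     = parse("2025-05-08 00:08:11.901867182 +0000 UTC")
    last_pull      = parse("2025-05-08 00:08:20.204304319 +0000 UTC")
    watch_observed = parse("2025-05-08 00:08:20.910361123 +0000 UTC")

    e2e = (watch_observed - created).total_seconds()        # ~9.910361 s (logged: 9.910361123s)
    slo = e2e - (last_pull - first_pull).total_seconds()    # ~1.607924 s (logged: 1.607923983)
    print(f"podStartE2EDuration ~= {e2e:.6f}s  podStartSLOduration ~= {slo:.6f}s")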
May 8 00:08:21.854203 containerd[1561]: time="2025-05-08T00:08:21.854174307Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:08:21.966268 containerd[1561]: time="2025-05-08T00:08:21.966239680Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\"" May 8 00:08:21.967129 containerd[1561]: time="2025-05-08T00:08:21.967052222Z" level=info msg="StartContainer for \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\"" May 8 00:08:21.987835 systemd[1]: Started cri-containerd-5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80.scope - libcontainer container 5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80. May 8 00:08:22.002332 systemd[1]: cri-containerd-5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80.scope: Deactivated successfully. May 8 00:08:22.019064 containerd[1561]: time="2025-05-08T00:08:22.019039091Z" level=info msg="StartContainer for \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\" returns successfully" May 8 00:08:22.193139 containerd[1561]: time="2025-05-08T00:08:22.193055957Z" level=info msg="shim disconnected" id=5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80 namespace=k8s.io May 8 00:08:22.193860 containerd[1561]: time="2025-05-08T00:08:22.193272540Z" level=warning msg="cleaning up after shim disconnected" id=5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80 namespace=k8s.io May 8 00:08:22.193860 containerd[1561]: time="2025-05-08T00:08:22.193283585Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:22.200688 containerd[1561]: time="2025-05-08T00:08:22.200395650Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:08:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:08:22.793987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80-rootfs.mount: Deactivated successfully. May 8 00:08:22.949279 containerd[1561]: time="2025-05-08T00:08:22.949256647Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:08:22.994636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920379267.mount: Deactivated successfully. May 8 00:08:23.015752 containerd[1561]: time="2025-05-08T00:08:23.015725099Z" level=info msg="CreateContainer within sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\"" May 8 00:08:23.016048 containerd[1561]: time="2025-05-08T00:08:23.016019235Z" level=info msg="StartContainer for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\"" May 8 00:08:23.046849 systemd[1]: Started cri-containerd-4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98.scope - libcontainer container 4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98. 
May 8 00:08:23.079101 containerd[1561]: time="2025-05-08T00:08:23.079021341Z" level=info msg="StartContainer for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" returns successfully" May 8 00:08:23.261944 kubelet[2812]: I0508 00:08:23.257929 2812 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:08:23.378659 kubelet[2812]: I0508 00:08:23.378405 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c835a3-2e4c-46da-a046-75f606264ab3-config-volume\") pod \"coredns-6f6b679f8f-72wm4\" (UID: \"b3c835a3-2e4c-46da-a046-75f606264ab3\") " pod="kube-system/coredns-6f6b679f8f-72wm4" May 8 00:08:23.378659 kubelet[2812]: I0508 00:08:23.378428 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9gn6\" (UniqueName: \"kubernetes.io/projected/0f458808-0815-471b-ba5c-4463a98daae9-kube-api-access-m9gn6\") pod \"coredns-6f6b679f8f-h6wz2\" (UID: \"0f458808-0815-471b-ba5c-4463a98daae9\") " pod="kube-system/coredns-6f6b679f8f-h6wz2" May 8 00:08:23.378659 kubelet[2812]: I0508 00:08:23.378443 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knd67\" (UniqueName: \"kubernetes.io/projected/b3c835a3-2e4c-46da-a046-75f606264ab3-kube-api-access-knd67\") pod \"coredns-6f6b679f8f-72wm4\" (UID: \"b3c835a3-2e4c-46da-a046-75f606264ab3\") " pod="kube-system/coredns-6f6b679f8f-72wm4" May 8 00:08:23.378659 kubelet[2812]: I0508 00:08:23.378455 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f458808-0815-471b-ba5c-4463a98daae9-config-volume\") pod \"coredns-6f6b679f8f-h6wz2\" (UID: \"0f458808-0815-471b-ba5c-4463a98daae9\") " pod="kube-system/coredns-6f6b679f8f-h6wz2" May 8 00:08:23.384690 systemd[1]: Created slice kubepods-burstable-pod0f458808_0815_471b_ba5c_4463a98daae9.slice - libcontainer container kubepods-burstable-pod0f458808_0815_471b_ba5c_4463a98daae9.slice. May 8 00:08:23.394728 systemd[1]: Created slice kubepods-burstable-podb3c835a3_2e4c_46da_a046_75f606264ab3.slice - libcontainer container kubepods-burstable-podb3c835a3_2e4c_46da_a046_75f606264ab3.slice. 
May 8 00:08:23.699867 containerd[1561]: time="2025-05-08T00:08:23.699461608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h6wz2,Uid:0f458808-0815-471b-ba5c-4463a98daae9,Namespace:kube-system,Attempt:0,}" May 8 00:08:23.700513 containerd[1561]: time="2025-05-08T00:08:23.700496710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-72wm4,Uid:b3c835a3-2e4c-46da-a046-75f606264ab3,Namespace:kube-system,Attempt:0,}" May 8 00:08:25.339215 systemd-networkd[1481]: cilium_host: Link UP May 8 00:08:25.339302 systemd-networkd[1481]: cilium_net: Link UP May 8 00:08:25.339304 systemd-networkd[1481]: cilium_net: Gained carrier May 8 00:08:25.339419 systemd-networkd[1481]: cilium_host: Gained carrier May 8 00:08:25.341642 systemd-networkd[1481]: cilium_net: Gained IPv6LL May 8 00:08:25.451930 systemd-networkd[1481]: cilium_vxlan: Link UP May 8 00:08:25.451934 systemd-networkd[1481]: cilium_vxlan: Gained carrier May 8 00:08:25.604787 systemd-networkd[1481]: cilium_host: Gained IPv6LL May 8 00:08:25.881821 kernel: NET: Registered PF_ALG protocol family May 8 00:08:26.323927 systemd-networkd[1481]: lxc_health: Link UP May 8 00:08:26.331839 systemd-networkd[1481]: lxc_health: Gained carrier May 8 00:08:26.755671 systemd-networkd[1481]: lxc41d270e7a6d8: Link UP May 8 00:08:26.759895 kernel: eth0: renamed from tmpe8026 May 8 00:08:26.766282 systemd-networkd[1481]: lxc41d270e7a6d8: Gained carrier May 8 00:08:26.804605 kernel: eth0: renamed from tmpf58b6 May 8 00:08:26.809614 systemd-networkd[1481]: lxc4b5d35013e13: Link UP May 8 00:08:26.816536 systemd-networkd[1481]: lxc4b5d35013e13: Gained carrier May 8 00:08:26.965860 systemd-networkd[1481]: cilium_vxlan: Gained IPv6LL May 8 00:08:27.412871 systemd-networkd[1481]: lxc_health: Gained IPv6LL May 8 00:08:27.568356 kubelet[2812]: I0508 00:08:27.566649 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qgrpq" podStartSLOduration=10.100211713 podStartE2EDuration="16.559087779s" podCreationTimestamp="2025-05-08 00:08:11 +0000 UTC" firstStartedPulling="2025-05-08 00:08:11.714643898 +0000 UTC m=+5.298041294" lastFinishedPulling="2025-05-08 00:08:18.173519967 +0000 UTC m=+11.756917360" observedRunningTime="2025-05-08 00:08:24.036083465 +0000 UTC m=+17.619480865" watchObservedRunningTime="2025-05-08 00:08:27.559087779 +0000 UTC m=+21.142485180" May 8 00:08:28.052807 systemd-networkd[1481]: lxc4b5d35013e13: Gained IPv6LL May 8 00:08:28.372771 systemd-networkd[1481]: lxc41d270e7a6d8: Gained IPv6LL May 8 00:08:29.535909 containerd[1561]: time="2025-05-08T00:08:29.535848227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:29.539729 containerd[1561]: time="2025-05-08T00:08:29.538735913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:29.539729 containerd[1561]: time="2025-05-08T00:08:29.538754074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:29.539729 containerd[1561]: time="2025-05-08T00:08:29.538816896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:29.546191 containerd[1561]: time="2025-05-08T00:08:29.546126077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:29.548723 containerd[1561]: time="2025-05-08T00:08:29.546815462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:29.548723 containerd[1561]: time="2025-05-08T00:08:29.546829556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:29.548723 containerd[1561]: time="2025-05-08T00:08:29.546881969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:29.567994 systemd[1]: Started cri-containerd-f58b63586224b058d347dec259d65a1ec4ec3f9c6f7006901cf33fa2cf5b41a4.scope - libcontainer container f58b63586224b058d347dec259d65a1ec4ec3f9c6f7006901cf33fa2cf5b41a4. May 8 00:08:29.572324 systemd[1]: Started cri-containerd-e80265309e31faed578b58a7dc1a51adbfb4634be077dc1b0bfd5ab86147c146.scope - libcontainer container e80265309e31faed578b58a7dc1a51adbfb4634be077dc1b0bfd5ab86147c146. May 8 00:08:29.588101 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:08:29.588370 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:08:29.623000 containerd[1561]: time="2025-05-08T00:08:29.621815194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h6wz2,Uid:0f458808-0815-471b-ba5c-4463a98daae9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e80265309e31faed578b58a7dc1a51adbfb4634be077dc1b0bfd5ab86147c146\"" May 8 00:08:29.629566 containerd[1561]: time="2025-05-08T00:08:29.629543463Z" level=info msg="CreateContainer within sandbox \"e80265309e31faed578b58a7dc1a51adbfb4634be077dc1b0bfd5ab86147c146\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:08:29.634652 containerd[1561]: time="2025-05-08T00:08:29.634581559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-72wm4,Uid:b3c835a3-2e4c-46da-a046-75f606264ab3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f58b63586224b058d347dec259d65a1ec4ec3f9c6f7006901cf33fa2cf5b41a4\"" May 8 00:08:29.637177 containerd[1561]: time="2025-05-08T00:08:29.636948780Z" level=info msg="CreateContainer within sandbox \"f58b63586224b058d347dec259d65a1ec4ec3f9c6f7006901cf33fa2cf5b41a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:08:29.754445 containerd[1561]: time="2025-05-08T00:08:29.754409970Z" level=info msg="CreateContainer within sandbox \"f58b63586224b058d347dec259d65a1ec4ec3f9c6f7006901cf33fa2cf5b41a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"881266c73db4fbee40a0cf21b5ad0e3f05807b64ea469f7f2211e3f11878a223\"" May 8 00:08:29.755054 containerd[1561]: time="2025-05-08T00:08:29.754907952Z" level=info msg="StartContainer for \"881266c73db4fbee40a0cf21b5ad0e3f05807b64ea469f7f2211e3f11878a223\"" May 8 00:08:29.755283 containerd[1561]: time="2025-05-08T00:08:29.755228216Z" level=info msg="CreateContainer within sandbox \"e80265309e31faed578b58a7dc1a51adbfb4634be077dc1b0bfd5ab86147c146\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4e525a7e105ba8a4f25eab502780aa50614cf63802001dc6a267403d51479da\"" May 8 00:08:29.759306 containerd[1561]: time="2025-05-08T00:08:29.759249732Z" level=info msg="StartContainer for 
\"f4e525a7e105ba8a4f25eab502780aa50614cf63802001dc6a267403d51479da\"" May 8 00:08:29.779835 systemd[1]: Started cri-containerd-881266c73db4fbee40a0cf21b5ad0e3f05807b64ea469f7f2211e3f11878a223.scope - libcontainer container 881266c73db4fbee40a0cf21b5ad0e3f05807b64ea469f7f2211e3f11878a223. May 8 00:08:29.783716 systemd[1]: Started cri-containerd-f4e525a7e105ba8a4f25eab502780aa50614cf63802001dc6a267403d51479da.scope - libcontainer container f4e525a7e105ba8a4f25eab502780aa50614cf63802001dc6a267403d51479da. May 8 00:08:29.806282 containerd[1561]: time="2025-05-08T00:08:29.804935693Z" level=info msg="StartContainer for \"881266c73db4fbee40a0cf21b5ad0e3f05807b64ea469f7f2211e3f11878a223\" returns successfully" May 8 00:08:29.810279 containerd[1561]: time="2025-05-08T00:08:29.810139682Z" level=info msg="StartContainer for \"f4e525a7e105ba8a4f25eab502780aa50614cf63802001dc6a267403d51479da\" returns successfully" May 8 00:08:29.988181 kubelet[2812]: I0508 00:08:29.987541 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-h6wz2" podStartSLOduration=18.987530905 podStartE2EDuration="18.987530905s" podCreationTimestamp="2025-05-08 00:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:29.987447157 +0000 UTC m=+23.570844555" watchObservedRunningTime="2025-05-08 00:08:29.987530905 +0000 UTC m=+23.570928297" May 8 00:08:30.542167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965266243.mount: Deactivated successfully. May 8 00:08:30.994261 kubelet[2812]: I0508 00:08:30.993649 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-72wm4" podStartSLOduration=19.993637661 podStartE2EDuration="19.993637661s" podCreationTimestamp="2025-05-08 00:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:29.999332455 +0000 UTC m=+23.582729853" watchObservedRunningTime="2025-05-08 00:08:30.993637661 +0000 UTC m=+24.577035055" May 8 00:09:16.348208 systemd[1]: Started sshd@7-139.178.70.109:22-139.178.89.65:39068.service - OpenSSH per-connection server daemon (139.178.89.65:39068). May 8 00:09:16.481577 sshd[4211]: Accepted publickey for core from 139.178.89.65 port 39068 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:16.483265 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:16.491437 systemd-logind[1535]: New session 10 of user core. May 8 00:09:16.497796 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:09:17.781074 sshd[4213]: Connection closed by 139.178.89.65 port 39068 May 8 00:09:17.783603 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. May 8 00:09:17.781611 sshd-session[4211]: pam_unix(sshd:session): session closed for user core May 8 00:09:17.783993 systemd[1]: sshd@7-139.178.70.109:22-139.178.89.65:39068.service: Deactivated successfully. May 8 00:09:17.785552 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:09:17.787612 systemd-logind[1535]: Removed session 10. May 8 00:09:22.791175 systemd[1]: Started sshd@8-139.178.70.109:22-139.178.89.65:40104.service - OpenSSH per-connection server daemon (139.178.89.65:40104). 
May 8 00:09:22.899015 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 40104 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:22.899968 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:22.903736 systemd-logind[1535]: New session 11 of user core. May 8 00:09:22.911032 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:09:23.036500 sshd[4228]: Connection closed by 139.178.89.65 port 40104 May 8 00:09:23.036983 sshd-session[4226]: pam_unix(sshd:session): session closed for user core May 8 00:09:23.039290 systemd[1]: sshd@8-139.178.70.109:22-139.178.89.65:40104.service: Deactivated successfully. May 8 00:09:23.039516 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. May 8 00:09:23.041288 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:09:23.042665 systemd-logind[1535]: Removed session 11. May 8 00:09:28.046568 systemd[1]: Started sshd@9-139.178.70.109:22-139.178.89.65:41524.service - OpenSSH per-connection server daemon (139.178.89.65:41524). May 8 00:09:28.082071 sshd[4241]: Accepted publickey for core from 139.178.89.65 port 41524 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:28.082958 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:28.085779 systemd-logind[1535]: New session 12 of user core. May 8 00:09:28.089803 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:09:28.184542 sshd[4243]: Connection closed by 139.178.89.65 port 41524 May 8 00:09:28.185006 sshd-session[4241]: pam_unix(sshd:session): session closed for user core May 8 00:09:28.186894 systemd[1]: sshd@9-139.178.70.109:22-139.178.89.65:41524.service: Deactivated successfully. May 8 00:09:28.188027 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:09:28.188500 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. May 8 00:09:28.189073 systemd-logind[1535]: Removed session 12. May 8 00:09:33.193806 systemd[1]: Started sshd@10-139.178.70.109:22-139.178.89.65:41528.service - OpenSSH per-connection server daemon (139.178.89.65:41528). May 8 00:09:33.229110 sshd[4256]: Accepted publickey for core from 139.178.89.65 port 41528 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:33.230067 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:33.232900 systemd-logind[1535]: New session 13 of user core. May 8 00:09:33.240846 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:09:33.349961 sshd[4258]: Connection closed by 139.178.89.65 port 41528 May 8 00:09:33.349954 sshd-session[4256]: pam_unix(sshd:session): session closed for user core May 8 00:09:33.359121 systemd[1]: sshd@10-139.178.70.109:22-139.178.89.65:41528.service: Deactivated successfully. May 8 00:09:33.360484 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:09:33.361058 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. May 8 00:09:33.364962 systemd[1]: Started sshd@11-139.178.70.109:22-139.178.89.65:41534.service - OpenSSH per-connection server daemon (139.178.89.65:41534). May 8 00:09:33.366290 systemd-logind[1535]: Removed session 13. 
May 8 00:09:33.399609 sshd[4269]: Accepted publickey for core from 139.178.89.65 port 41534 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:33.400417 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:33.403543 systemd-logind[1535]: New session 14 of user core. May 8 00:09:33.410793 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:09:33.539886 sshd[4272]: Connection closed by 139.178.89.65 port 41534 May 8 00:09:33.540902 sshd-session[4269]: pam_unix(sshd:session): session closed for user core May 8 00:09:33.549157 systemd[1]: sshd@11-139.178.70.109:22-139.178.89.65:41534.service: Deactivated successfully. May 8 00:09:33.553007 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:09:33.555339 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit. May 8 00:09:33.563008 systemd[1]: Started sshd@12-139.178.70.109:22-139.178.89.65:41544.service - OpenSSH per-connection server daemon (139.178.89.65:41544). May 8 00:09:33.564922 systemd-logind[1535]: Removed session 14. May 8 00:09:33.615020 sshd[4281]: Accepted publickey for core from 139.178.89.65 port 41544 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:33.615924 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:33.618725 systemd-logind[1535]: New session 15 of user core. May 8 00:09:33.625882 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:09:33.720409 sshd[4284]: Connection closed by 139.178.89.65 port 41544 May 8 00:09:33.720349 sshd-session[4281]: pam_unix(sshd:session): session closed for user core May 8 00:09:33.722460 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit. May 8 00:09:33.722628 systemd[1]: sshd@12-139.178.70.109:22-139.178.89.65:41544.service: Deactivated successfully. May 8 00:09:33.723894 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:09:33.724769 systemd-logind[1535]: Removed session 15. May 8 00:09:38.733103 systemd[1]: Started sshd@13-139.178.70.109:22-139.178.89.65:51686.service - OpenSSH per-connection server daemon (139.178.89.65:51686). May 8 00:09:38.772371 sshd[4296]: Accepted publickey for core from 139.178.89.65 port 51686 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:38.773566 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:38.776592 systemd-logind[1535]: New session 16 of user core. May 8 00:09:38.779834 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:09:38.868461 sshd[4298]: Connection closed by 139.178.89.65 port 51686 May 8 00:09:38.868916 sshd-session[4296]: pam_unix(sshd:session): session closed for user core May 8 00:09:38.871141 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit. May 8 00:09:38.871608 systemd[1]: sshd@13-139.178.70.109:22-139.178.89.65:51686.service: Deactivated successfully. May 8 00:09:38.872934 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:09:38.873648 systemd-logind[1535]: Removed session 16. May 8 00:09:43.879816 systemd[1]: Started sshd@14-139.178.70.109:22-139.178.89.65:51696.service - OpenSSH per-connection server daemon (139.178.89.65:51696). 
May 8 00:09:43.933216 sshd[4312]: Accepted publickey for core from 139.178.89.65 port 51696 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:43.934315 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:43.937101 systemd-logind[1535]: New session 17 of user core. May 8 00:09:43.940797 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:09:44.063914 sshd[4314]: Connection closed by 139.178.89.65 port 51696 May 8 00:09:44.064486 sshd-session[4312]: pam_unix(sshd:session): session closed for user core May 8 00:09:44.074564 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit. May 8 00:09:44.074736 systemd[1]: sshd@14-139.178.70.109:22-139.178.89.65:51696.service: Deactivated successfully. May 8 00:09:44.076036 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:09:44.076727 systemd-logind[1535]: Removed session 17. May 8 00:09:49.074198 systemd[1]: Started sshd@15-139.178.70.109:22-139.178.89.65:33672.service - OpenSSH per-connection server daemon (139.178.89.65:33672). May 8 00:09:49.110301 sshd[4326]: Accepted publickey for core from 139.178.89.65 port 33672 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:49.111206 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:49.113664 systemd-logind[1535]: New session 18 of user core. May 8 00:09:49.118876 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:09:49.281442 sshd[4328]: Connection closed by 139.178.89.65 port 33672 May 8 00:09:49.281888 sshd-session[4326]: pam_unix(sshd:session): session closed for user core May 8 00:09:49.288841 systemd[1]: sshd@15-139.178.70.109:22-139.178.89.65:33672.service: Deactivated successfully. May 8 00:09:49.289875 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:09:49.290707 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit. May 8 00:09:49.294014 systemd[1]: Started sshd@16-139.178.70.109:22-139.178.89.65:33680.service - OpenSSH per-connection server daemon (139.178.89.65:33680). May 8 00:09:49.295023 systemd-logind[1535]: Removed session 18. May 8 00:09:49.436816 sshd[4339]: Accepted publickey for core from 139.178.89.65 port 33680 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:49.438257 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:49.441765 systemd-logind[1535]: New session 19 of user core. May 8 00:09:49.443863 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:09:50.355320 sshd[4342]: Connection closed by 139.178.89.65 port 33680 May 8 00:09:50.356054 sshd-session[4339]: pam_unix(sshd:session): session closed for user core May 8 00:09:50.364347 systemd[1]: sshd@16-139.178.70.109:22-139.178.89.65:33680.service: Deactivated successfully. May 8 00:09:50.365629 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:09:50.367033 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit. May 8 00:09:50.368969 systemd[1]: Started sshd@17-139.178.70.109:22-139.178.89.65:33682.service - OpenSSH per-connection server daemon (139.178.89.65:33682). May 8 00:09:50.370388 systemd-logind[1535]: Removed session 19. 
May 8 00:09:50.475937 sshd[4350]: Accepted publickey for core from 139.178.89.65 port 33682 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:50.476940 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:50.481744 systemd-logind[1535]: New session 20 of user core. May 8 00:09:50.484777 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:09:52.625730 sshd[4353]: Connection closed by 139.178.89.65 port 33682 May 8 00:09:52.626356 sshd-session[4350]: pam_unix(sshd:session): session closed for user core May 8 00:09:52.643962 systemd[1]: Started sshd@18-139.178.70.109:22-139.178.89.65:33684.service - OpenSSH per-connection server daemon (139.178.89.65:33684). May 8 00:09:52.750172 systemd[1]: sshd@17-139.178.70.109:22-139.178.89.65:33682.service: Deactivated successfully. May 8 00:09:52.751352 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:09:52.751496 systemd[1]: session-20.scope: Consumed 376ms CPU time, 66.8M memory peak. May 8 00:09:52.751951 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit. May 8 00:09:52.752630 systemd-logind[1535]: Removed session 20. May 8 00:09:52.901084 sshd[4367]: Accepted publickey for core from 139.178.89.65 port 33684 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:52.902860 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:52.907138 systemd-logind[1535]: New session 21 of user core. May 8 00:09:52.912913 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:09:53.352849 sshd[4372]: Connection closed by 139.178.89.65 port 33684 May 8 00:09:53.353583 sshd-session[4367]: pam_unix(sshd:session): session closed for user core May 8 00:09:53.364291 systemd[1]: sshd@18-139.178.70.109:22-139.178.89.65:33684.service: Deactivated successfully. May 8 00:09:53.367335 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:09:53.369750 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit. May 8 00:09:53.380867 systemd[1]: Started sshd@19-139.178.70.109:22-139.178.89.65:33696.service - OpenSSH per-connection server daemon (139.178.89.65:33696). May 8 00:09:53.383127 systemd-logind[1535]: Removed session 21. May 8 00:09:53.422736 sshd[4380]: Accepted publickey for core from 139.178.89.65 port 33696 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:53.424470 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:53.432816 systemd-logind[1535]: New session 22 of user core. May 8 00:09:53.441917 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:09:53.564650 sshd[4383]: Connection closed by 139.178.89.65 port 33696 May 8 00:09:53.565474 sshd-session[4380]: pam_unix(sshd:session): session closed for user core May 8 00:09:53.568626 systemd[1]: sshd@19-139.178.70.109:22-139.178.89.65:33696.service: Deactivated successfully. May 8 00:09:53.572410 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:09:53.575497 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit. May 8 00:09:53.576737 systemd-logind[1535]: Removed session 22. May 8 00:09:58.575025 systemd[1]: Started sshd@20-139.178.70.109:22-139.178.89.65:52918.service - OpenSSH per-connection server daemon (139.178.89.65:52918). 
May 8 00:09:58.611636 sshd[4398]: Accepted publickey for core from 139.178.89.65 port 52918 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:58.612937 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:58.615966 systemd-logind[1535]: New session 23 of user core. May 8 00:09:58.619796 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:09:58.711714 sshd[4400]: Connection closed by 139.178.89.65 port 52918 May 8 00:09:58.712109 sshd-session[4398]: pam_unix(sshd:session): session closed for user core May 8 00:09:58.714033 systemd[1]: sshd@20-139.178.70.109:22-139.178.89.65:52918.service: Deactivated successfully. May 8 00:09:58.715174 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:09:58.715633 systemd-logind[1535]: Session 23 logged out. Waiting for processes to exit. May 8 00:09:58.716287 systemd-logind[1535]: Removed session 23. May 8 00:10:03.727914 systemd[1]: Started sshd@21-139.178.70.109:22-139.178.89.65:52928.service - OpenSSH per-connection server daemon (139.178.89.65:52928). May 8 00:10:03.763465 sshd[4412]: Accepted publickey for core from 139.178.89.65 port 52928 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:10:03.764334 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:10:03.767239 systemd-logind[1535]: New session 24 of user core. May 8 00:10:03.773861 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:10:03.881094 sshd[4414]: Connection closed by 139.178.89.65 port 52928 May 8 00:10:03.881461 sshd-session[4412]: pam_unix(sshd:session): session closed for user core May 8 00:10:03.883614 systemd-logind[1535]: Session 24 logged out. Waiting for processes to exit. May 8 00:10:03.883752 systemd[1]: sshd@21-139.178.70.109:22-139.178.89.65:52928.service: Deactivated successfully. May 8 00:10:03.884818 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:10:03.885358 systemd-logind[1535]: Removed session 24. May 8 00:10:08.891002 systemd[1]: Started sshd@22-139.178.70.109:22-139.178.89.65:46086.service - OpenSSH per-connection server daemon (139.178.89.65:46086). May 8 00:10:08.926756 sshd[4427]: Accepted publickey for core from 139.178.89.65 port 46086 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:10:08.927630 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:10:08.930528 systemd-logind[1535]: New session 25 of user core. May 8 00:10:08.939782 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:10:09.053552 sshd[4429]: Connection closed by 139.178.89.65 port 46086 May 8 00:10:09.054033 sshd-session[4427]: pam_unix(sshd:session): session closed for user core May 8 00:10:09.061083 systemd[1]: sshd@22-139.178.70.109:22-139.178.89.65:46086.service: Deactivated successfully. May 8 00:10:09.062159 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:10:09.063091 systemd-logind[1535]: Session 25 logged out. Waiting for processes to exit. May 8 00:10:09.068949 systemd[1]: Started sshd@23-139.178.70.109:22-139.178.89.65:46094.service - OpenSSH per-connection server daemon (139.178.89.65:46094). May 8 00:10:09.070262 systemd-logind[1535]: Removed session 25. 
May 8 00:10:09.103731 sshd[4440]: Accepted publickey for core from 139.178.89.65 port 46094 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:10:09.104765 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:10:09.107738 systemd-logind[1535]: New session 26 of user core. May 8 00:10:09.119150 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:10:10.519233 containerd[1561]: time="2025-05-08T00:10:10.519192728Z" level=info msg="StopContainer for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" with timeout 30 (s)" May 8 00:10:10.521498 containerd[1561]: time="2025-05-08T00:10:10.521428656Z" level=info msg="Stop container \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" with signal terminated" May 8 00:10:10.571620 systemd[1]: run-containerd-runc-k8s.io-4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98-runc.aj08h4.mount: Deactivated successfully. May 8 00:10:10.572528 systemd[1]: cri-containerd-3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847.scope: Deactivated successfully. May 8 00:10:10.587029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847-rootfs.mount: Deactivated successfully. May 8 00:10:10.591223 containerd[1561]: time="2025-05-08T00:10:10.591082844Z" level=info msg="shim disconnected" id=3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847 namespace=k8s.io May 8 00:10:10.591223 containerd[1561]: time="2025-05-08T00:10:10.591143111Z" level=warning msg="cleaning up after shim disconnected" id=3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847 namespace=k8s.io May 8 00:10:10.591223 containerd[1561]: time="2025-05-08T00:10:10.591152649Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:10.597180 containerd[1561]: time="2025-05-08T00:10:10.597139921Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:10:10.601600 containerd[1561]: time="2025-05-08T00:10:10.601567441Z" level=info msg="StopContainer for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" with timeout 2 (s)" May 8 00:10:10.601933 containerd[1561]: time="2025-05-08T00:10:10.601923509Z" level=info msg="Stop container \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" with signal terminated" May 8 00:10:10.603141 containerd[1561]: time="2025-05-08T00:10:10.603124924Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:10:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:10:10.605001 containerd[1561]: time="2025-05-08T00:10:10.604916054Z" level=info msg="StopContainer for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" returns successfully" May 8 00:10:10.605445 containerd[1561]: time="2025-05-08T00:10:10.605430569Z" level=info msg="StopPodSandbox for \"22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266\"" May 8 00:10:10.606381 containerd[1561]: time="2025-05-08T00:10:10.606276132Z" level=info msg="Container to stop \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" May 8 00:10:10.610501 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266-shm.mount: Deactivated successfully. May 8 00:10:10.612819 systemd-networkd[1481]: lxc_health: Link DOWN May 8 00:10:10.612824 systemd-networkd[1481]: lxc_health: Lost carrier May 8 00:10:10.617776 systemd[1]: cri-containerd-22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266.scope: Deactivated successfully. May 8 00:10:10.633469 systemd[1]: cri-containerd-4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98.scope: Deactivated successfully. May 8 00:10:10.633819 systemd[1]: cri-containerd-4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98.scope: Consumed 4.701s CPU time, 196.2M memory peak, 72.7M read from disk, 13.3M written to disk. May 8 00:10:10.641667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266-rootfs.mount: Deactivated successfully. May 8 00:10:10.648395 containerd[1561]: time="2025-05-08T00:10:10.648180776Z" level=info msg="shim disconnected" id=22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266 namespace=k8s.io May 8 00:10:10.648541 containerd[1561]: time="2025-05-08T00:10:10.648531615Z" level=warning msg="cleaning up after shim disconnected" id=22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266 namespace=k8s.io May 8 00:10:10.648658 containerd[1561]: time="2025-05-08T00:10:10.648648920Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:10.656597 containerd[1561]: time="2025-05-08T00:10:10.656541762Z" level=info msg="shim disconnected" id=4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98 namespace=k8s.io May 8 00:10:10.656597 containerd[1561]: time="2025-05-08T00:10:10.656586832Z" level=warning msg="cleaning up after shim disconnected" id=4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98 namespace=k8s.io May 8 00:10:10.656597 containerd[1561]: time="2025-05-08T00:10:10.656594168Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:10.660044 containerd[1561]: time="2025-05-08T00:10:10.660024675Z" level=info msg="TearDown network for sandbox \"22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266\" successfully" May 8 00:10:10.660125 containerd[1561]: time="2025-05-08T00:10:10.660116943Z" level=info msg="StopPodSandbox for \"22ad8b0489d4b1587a88d009b3280e13255e34966ef42aee7078b79e18466266\" returns successfully" May 8 00:10:10.672485 containerd[1561]: time="2025-05-08T00:10:10.672409665Z" level=info msg="StopContainer for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" returns successfully" May 8 00:10:10.672946 containerd[1561]: time="2025-05-08T00:10:10.672832636Z" level=info msg="StopPodSandbox for \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\"" May 8 00:10:10.672946 containerd[1561]: time="2025-05-08T00:10:10.672853124Z" level=info msg="Container to stop \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:10:10.672946 containerd[1561]: time="2025-05-08T00:10:10.672873552Z" level=info msg="Container to stop \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:10:10.672946 containerd[1561]: time="2025-05-08T00:10:10.672878735Z" level=info 
msg="Container to stop \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:10:10.672946 containerd[1561]: time="2025-05-08T00:10:10.672889472Z" level=info msg="Container to stop \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:10:10.672946 containerd[1561]: time="2025-05-08T00:10:10.672894226Z" level=info msg="Container to stop \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:10:10.677655 systemd[1]: cri-containerd-e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f.scope: Deactivated successfully. May 8 00:10:10.699364 containerd[1561]: time="2025-05-08T00:10:10.699322072Z" level=info msg="shim disconnected" id=e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f namespace=k8s.io May 8 00:10:10.699488 containerd[1561]: time="2025-05-08T00:10:10.699447799Z" level=warning msg="cleaning up after shim disconnected" id=e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f namespace=k8s.io May 8 00:10:10.699488 containerd[1561]: time="2025-05-08T00:10:10.699456304Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:10.709093 containerd[1561]: time="2025-05-08T00:10:10.709060198Z" level=info msg="TearDown network for sandbox \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" successfully" May 8 00:10:10.709093 containerd[1561]: time="2025-05-08T00:10:10.709085134Z" level=info msg="StopPodSandbox for \"e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f\" returns successfully" May 8 00:10:10.763808 kubelet[2812]: I0508 00:10:10.763681 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsk7l\" (UniqueName: \"kubernetes.io/projected/6a9134e1-ed8b-44b1-a409-7575c18174b9-kube-api-access-vsk7l\") pod \"6a9134e1-ed8b-44b1-a409-7575c18174b9\" (UID: \"6a9134e1-ed8b-44b1-a409-7575c18174b9\") " May 8 00:10:10.763808 kubelet[2812]: I0508 00:10:10.763756 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a9134e1-ed8b-44b1-a409-7575c18174b9-cilium-config-path\") pod \"6a9134e1-ed8b-44b1-a409-7575c18174b9\" (UID: \"6a9134e1-ed8b-44b1-a409-7575c18174b9\") " May 8 00:10:10.772656 kubelet[2812]: I0508 00:10:10.771656 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9134e1-ed8b-44b1-a409-7575c18174b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a9134e1-ed8b-44b1-a409-7575c18174b9" (UID: "6a9134e1-ed8b-44b1-a409-7575c18174b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:10:10.773985 kubelet[2812]: I0508 00:10:10.773959 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9134e1-ed8b-44b1-a409-7575c18174b9-kube-api-access-vsk7l" (OuterVolumeSpecName: "kube-api-access-vsk7l") pod "6a9134e1-ed8b-44b1-a409-7575c18174b9" (UID: "6a9134e1-ed8b-44b1-a409-7575c18174b9"). InnerVolumeSpecName "kube-api-access-vsk7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:10:10.864982 kubelet[2812]: I0508 00:10:10.864876 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-bpf-maps\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.864982 kubelet[2812]: I0508 00:10:10.864974 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-kernel\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865095 kubelet[2812]: I0508 00:10:10.864993 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c52980-bad2-443f-9e83-097e93133d79-cilium-config-path\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865095 kubelet[2812]: I0508 00:10:10.865002 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-net\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865095 kubelet[2812]: I0508 00:10:10.865012 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-lib-modules\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865095 kubelet[2812]: I0508 00:10:10.865020 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-etc-cni-netd\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865095 kubelet[2812]: I0508 00:10:10.865030 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cni-path\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865095 kubelet[2812]: I0508 00:10:10.865038 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-run\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865205 kubelet[2812]: I0508 00:10:10.865047 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-cgroup\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865205 kubelet[2812]: I0508 00:10:10.865058 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-xtables-lock\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 
00:10:10.865205 kubelet[2812]: I0508 00:10:10.865068 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2bgc\" (UniqueName: \"kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-kube-api-access-w2bgc\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865205 kubelet[2812]: I0508 00:10:10.865076 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-hostproc\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865205 kubelet[2812]: I0508 00:10:10.865092 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-hubble-tls\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865205 kubelet[2812]: I0508 00:10:10.865104 2812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c52980-bad2-443f-9e83-097e93133d79-clustermesh-secrets\") pod \"34c52980-bad2-443f-9e83-097e93133d79\" (UID: \"34c52980-bad2-443f-9e83-097e93133d79\") " May 8 00:10:10.865304 kubelet[2812]: I0508 00:10:10.865136 2812 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a9134e1-ed8b-44b1-a409-7575c18174b9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.865304 kubelet[2812]: I0508 00:10:10.865145 2812 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vsk7l\" (UniqueName: \"kubernetes.io/projected/6a9134e1-ed8b-44b1-a409-7575c18174b9-kube-api-access-vsk7l\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.865625 kubelet[2812]: I0508 00:10:10.865612 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867062 kubelet[2812]: I0508 00:10:10.865664 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867062 kubelet[2812]: I0508 00:10:10.865675 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867062 kubelet[2812]: I0508 00:10:10.865683 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867062 kubelet[2812]: I0508 00:10:10.865689 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867062 kubelet[2812]: I0508 00:10:10.865707 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867290 kubelet[2812]: I0508 00:10:10.865710 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867290 kubelet[2812]: I0508 00:10:10.865714 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cni-path" (OuterVolumeSpecName: "cni-path") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867290 kubelet[2812]: I0508 00:10:10.866793 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c52980-bad2-443f-9e83-097e93133d79-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:10:10.867290 kubelet[2812]: I0508 00:10:10.866996 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867290 kubelet[2812]: I0508 00:10:10.867044 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-hostproc" (OuterVolumeSpecName: "hostproc") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:10:10.867559 kubelet[2812]: I0508 00:10:10.867451 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c52980-bad2-443f-9e83-097e93133d79-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:10:10.868846 kubelet[2812]: I0508 00:10:10.868832 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-kube-api-access-w2bgc" (OuterVolumeSpecName: "kube-api-access-w2bgc") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "kube-api-access-w2bgc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:10:10.869378 kubelet[2812]: I0508 00:10:10.869366 2812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34c52980-bad2-443f-9e83-097e93133d79" (UID: "34c52980-bad2-443f-9e83-097e93133d79"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:10:10.965774 kubelet[2812]: I0508 00:10:10.965742 2812 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965926 2812 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965936 2812 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965944 2812 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965953 2812 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965960 2812 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965966 2812 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 00:10:10.965973 2812 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c52980-bad2-443f-9e83-097e93133d79-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966183 kubelet[2812]: I0508 
00:10:10.966026 2812 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966390 kubelet[2812]: I0508 00:10:10.966042 2812 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w2bgc\" (UniqueName: \"kubernetes.io/projected/34c52980-bad2-443f-9e83-097e93133d79-kube-api-access-w2bgc\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966390 kubelet[2812]: I0508 00:10:10.966051 2812 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966390 kubelet[2812]: I0508 00:10:10.966058 2812 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966390 kubelet[2812]: I0508 00:10:10.966065 2812 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c52980-bad2-443f-9e83-097e93133d79-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:10:10.966390 kubelet[2812]: I0508 00:10:10.966084 2812 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c52980-bad2-443f-9e83-097e93133d79-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:10:11.154612 kubelet[2812]: I0508 00:10:11.154346 2812 scope.go:117] "RemoveContainer" containerID="4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98" May 8 00:10:11.158595 systemd[1]: Removed slice kubepods-burstable-pod34c52980_bad2_443f_9e83_097e93133d79.slice - libcontainer container kubepods-burstable-pod34c52980_bad2_443f_9e83_097e93133d79.slice. May 8 00:10:11.158660 systemd[1]: kubepods-burstable-pod34c52980_bad2_443f_9e83_097e93133d79.slice: Consumed 4.756s CPU time, 197.4M memory peak, 73.9M read from disk, 13.3M written to disk. May 8 00:10:11.164261 containerd[1561]: time="2025-05-08T00:10:11.163924211Z" level=info msg="RemoveContainer for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\"" May 8 00:10:11.163991 systemd[1]: Removed slice kubepods-besteffort-pod6a9134e1_ed8b_44b1_a409_7575c18174b9.slice - libcontainer container kubepods-besteffort-pod6a9134e1_ed8b_44b1_a409_7575c18174b9.slice. 
May 8 00:10:11.167740 containerd[1561]: time="2025-05-08T00:10:11.166989800Z" level=info msg="RemoveContainer for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" returns successfully" May 8 00:10:11.167842 kubelet[2812]: I0508 00:10:11.167489 2812 scope.go:117] "RemoveContainer" containerID="5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80" May 8 00:10:11.170496 containerd[1561]: time="2025-05-08T00:10:11.170415682Z" level=info msg="RemoveContainer for \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\"" May 8 00:10:11.172505 containerd[1561]: time="2025-05-08T00:10:11.172416551Z" level=info msg="RemoveContainer for \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\" returns successfully" May 8 00:10:11.172907 kubelet[2812]: I0508 00:10:11.172774 2812 scope.go:117] "RemoveContainer" containerID="2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce" May 8 00:10:11.174737 containerd[1561]: time="2025-05-08T00:10:11.174482896Z" level=info msg="RemoveContainer for \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\"" May 8 00:10:11.175822 containerd[1561]: time="2025-05-08T00:10:11.175809837Z" level=info msg="RemoveContainer for \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\" returns successfully" May 8 00:10:11.176018 kubelet[2812]: I0508 00:10:11.175972 2812 scope.go:117] "RemoveContainer" containerID="d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983" May 8 00:10:11.176580 containerd[1561]: time="2025-05-08T00:10:11.176570012Z" level=info msg="RemoveContainer for \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\"" May 8 00:10:11.177808 containerd[1561]: time="2025-05-08T00:10:11.177769675Z" level=info msg="RemoveContainer for \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\" returns successfully" May 8 00:10:11.177844 kubelet[2812]: I0508 00:10:11.177833 2812 scope.go:117] "RemoveContainer" containerID="94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a" May 8 00:10:11.178395 containerd[1561]: time="2025-05-08T00:10:11.178296415Z" level=info msg="RemoveContainer for \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\"" May 8 00:10:11.179284 containerd[1561]: time="2025-05-08T00:10:11.179268914Z" level=info msg="RemoveContainer for \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\" returns successfully" May 8 00:10:11.179516 kubelet[2812]: I0508 00:10:11.179377 2812 scope.go:117] "RemoveContainer" containerID="4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98" May 8 00:10:11.179734 containerd[1561]: time="2025-05-08T00:10:11.179619520Z" level=error msg="ContainerStatus for \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\": not found" May 8 00:10:11.180036 kubelet[2812]: E0508 00:10:11.179932 2812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\": not found" containerID="4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98" May 8 00:10:11.180036 kubelet[2812]: I0508 00:10:11.179952 2812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98"} err="failed to get container status \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98\": not found" May 8 00:10:11.180036 kubelet[2812]: I0508 00:10:11.180001 2812 scope.go:117] "RemoveContainer" containerID="5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80" May 8 00:10:11.180227 containerd[1561]: time="2025-05-08T00:10:11.180192556Z" level=error msg="ContainerStatus for \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\": not found" May 8 00:10:11.182560 kubelet[2812]: E0508 00:10:11.182549 2812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\": not found" containerID="5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80" May 8 00:10:11.182666 kubelet[2812]: I0508 00:10:11.182606 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80"} err="failed to get container status \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\": rpc error: code = NotFound desc = an error occurred when try to find container \"5892e426f6fe58a98adb85de8df217602940d433cfa1e5fa5afae153a4ac5e80\": not found" May 8 00:10:11.182666 kubelet[2812]: I0508 00:10:11.182617 2812 scope.go:117] "RemoveContainer" containerID="2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce" May 8 00:10:11.182823 containerd[1561]: time="2025-05-08T00:10:11.182810012Z" level=error msg="ContainerStatus for \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\": not found" May 8 00:10:11.188852 kubelet[2812]: E0508 00:10:11.188841 2812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\": not found" containerID="2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce" May 8 00:10:11.188913 kubelet[2812]: I0508 00:10:11.188903 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce"} err="failed to get container status \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f67b6e7ef3684a5abb8fa6150e866cbd967df1492a552f35f553ce609009dce\": not found" May 8 00:10:11.188949 kubelet[2812]: I0508 00:10:11.188944 2812 scope.go:117] "RemoveContainer" containerID="d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983" May 8 00:10:11.189136 containerd[1561]: time="2025-05-08T00:10:11.189102121Z" level=error msg="ContainerStatus for \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\": not found" May 8 00:10:11.189316 kubelet[2812]: E0508 00:10:11.189197 2812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\": not found" containerID="d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983" May 8 00:10:11.189316 kubelet[2812]: I0508 00:10:11.189209 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983"} err="failed to get container status \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1229d352fd5a2658e355fbe9bc6e2a9c069d26cb3cbfaa74c66ad2a5bb22983\": not found" May 8 00:10:11.189316 kubelet[2812]: I0508 00:10:11.189216 2812 scope.go:117] "RemoveContainer" containerID="94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a" May 8 00:10:11.189381 containerd[1561]: time="2025-05-08T00:10:11.189288395Z" level=error msg="ContainerStatus for \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\": not found" May 8 00:10:11.189484 kubelet[2812]: E0508 00:10:11.189440 2812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\": not found" containerID="94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a" May 8 00:10:11.189484 kubelet[2812]: I0508 00:10:11.189457 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a"} err="failed to get container status \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\": rpc error: code = NotFound desc = an error occurred when try to find container \"94f82d4b64350c9cdffc05f2bc1c0cb55583aef4f03a428aa25839b46c45546a\": not found" May 8 00:10:11.189484 kubelet[2812]: I0508 00:10:11.189465 2812 scope.go:117] "RemoveContainer" containerID="3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847" May 8 00:10:11.190297 containerd[1561]: time="2025-05-08T00:10:11.190066223Z" level=info msg="RemoveContainer for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\"" May 8 00:10:11.191102 containerd[1561]: time="2025-05-08T00:10:11.191089720Z" level=info msg="RemoveContainer for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" returns successfully" May 8 00:10:11.191207 kubelet[2812]: I0508 00:10:11.191199 2812 scope.go:117] "RemoveContainer" containerID="3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847" May 8 00:10:11.191422 containerd[1561]: time="2025-05-08T00:10:11.191340747Z" level=error msg="ContainerStatus for \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\": not found" May 8 00:10:11.191455 kubelet[2812]: E0508 
00:10:11.191396 2812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\": not found" containerID="3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847" May 8 00:10:11.191455 kubelet[2812]: I0508 00:10:11.191408 2812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847"} err="failed to get container status \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cd41093e95c7dc73447d2eac2b2866481f22db79bd3ef8ebc6a4c504b28e847\": not found" May 8 00:10:11.563886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e293cbd234b264128505d0038878e75f957af2f596fb45995f262c4ba56af98-rootfs.mount: Deactivated successfully. May 8 00:10:11.563949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f-rootfs.mount: Deactivated successfully. May 8 00:10:11.563987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e145a9e1c0b695d55f48c99c71401a3a9fdf7e467372e74726707c6ac0d2bf1f-shm.mount: Deactivated successfully. May 8 00:10:11.564034 systemd[1]: var-lib-kubelet-pods-6a9134e1\x2ded8b\x2d44b1\x2da409\x2d7575c18174b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvsk7l.mount: Deactivated successfully. May 8 00:10:11.564074 systemd[1]: var-lib-kubelet-pods-34c52980\x2dbad2\x2d443f\x2d9e83\x2d097e93133d79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2bgc.mount: Deactivated successfully. May 8 00:10:11.564114 systemd[1]: var-lib-kubelet-pods-34c52980\x2dbad2\x2d443f\x2d9e83\x2d097e93133d79-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:10:11.564153 systemd[1]: var-lib-kubelet-pods-34c52980\x2dbad2\x2d443f\x2d9e83\x2d097e93133d79-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:10:11.843081 kubelet[2812]: E0508 00:10:11.842988 2812 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:10:12.485528 sshd[4443]: Connection closed by 139.178.89.65 port 46094 May 8 00:10:12.486485 sshd-session[4440]: pam_unix(sshd:session): session closed for user core May 8 00:10:12.492016 systemd[1]: sshd@23-139.178.70.109:22-139.178.89.65:46094.service: Deactivated successfully. May 8 00:10:12.492986 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:10:12.493422 systemd-logind[1535]: Session 26 logged out. Waiting for processes to exit. May 8 00:10:12.494504 systemd[1]: Started sshd@24-139.178.70.109:22-139.178.89.65:46108.service - OpenSSH per-connection server daemon (139.178.89.65:46108). May 8 00:10:12.496080 systemd-logind[1535]: Removed session 26. May 8 00:10:12.534685 sshd[4605]: Accepted publickey for core from 139.178.89.65 port 46108 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:10:12.535440 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:10:12.538160 systemd-logind[1535]: New session 27 of user core. May 8 00:10:12.542767 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 8 00:10:12.763939 kubelet[2812]: I0508 00:10:12.763450 2812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c52980-bad2-443f-9e83-097e93133d79" path="/var/lib/kubelet/pods/34c52980-bad2-443f-9e83-097e93133d79/volumes" May 8 00:10:12.764325 kubelet[2812]: I0508 00:10:12.764312 2812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9134e1-ed8b-44b1-a409-7575c18174b9" path="/var/lib/kubelet/pods/6a9134e1-ed8b-44b1-a409-7575c18174b9/volumes" May 8 00:10:12.846987 sshd[4608]: Connection closed by 139.178.89.65 port 46108 May 8 00:10:12.847283 sshd-session[4605]: pam_unix(sshd:session): session closed for user core May 8 00:10:12.856672 systemd[1]: sshd@24-139.178.70.109:22-139.178.89.65:46108.service: Deactivated successfully. May 8 00:10:12.858192 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:10:12.859259 systemd-logind[1535]: Session 27 logged out. Waiting for processes to exit. May 8 00:10:12.867377 systemd[1]: Started sshd@25-139.178.70.109:22-139.178.89.65:46110.service - OpenSSH per-connection server daemon (139.178.89.65:46110). May 8 00:10:12.868716 systemd-logind[1535]: Removed session 27. May 8 00:10:12.875656 kubelet[2812]: E0508 00:10:12.875617 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c52980-bad2-443f-9e83-097e93133d79" containerName="apply-sysctl-overwrites" May 8 00:10:12.875656 kubelet[2812]: E0508 00:10:12.875652 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6a9134e1-ed8b-44b1-a409-7575c18174b9" containerName="cilium-operator" May 8 00:10:12.875656 kubelet[2812]: E0508 00:10:12.875658 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c52980-bad2-443f-9e83-097e93133d79" containerName="clean-cilium-state" May 8 00:10:12.875656 kubelet[2812]: E0508 00:10:12.875663 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c52980-bad2-443f-9e83-097e93133d79" containerName="mount-cgroup" May 8 00:10:12.876314 kubelet[2812]: E0508 00:10:12.875666 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c52980-bad2-443f-9e83-097e93133d79" containerName="cilium-agent" May 8 00:10:12.876314 kubelet[2812]: E0508 00:10:12.875670 2812 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c52980-bad2-443f-9e83-097e93133d79" containerName="mount-bpf-fs" May 8 00:10:12.878262 kubelet[2812]: I0508 00:10:12.878237 2812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9134e1-ed8b-44b1-a409-7575c18174b9" containerName="cilium-operator" May 8 00:10:12.878262 kubelet[2812]: I0508 00:10:12.878255 2812 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c52980-bad2-443f-9e83-097e93133d79" containerName="cilium-agent" May 8 00:10:12.891852 systemd[1]: Created slice kubepods-burstable-poddf8336e5_4f1a_4774_8a78_0d448ebe3f79.slice - libcontainer container kubepods-burstable-poddf8336e5_4f1a_4774_8a78_0d448ebe3f79.slice. May 8 00:10:12.921465 sshd[4617]: Accepted publickey for core from 139.178.89.65 port 46110 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:10:12.922563 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:10:12.926210 systemd-logind[1535]: New session 28 of user core. May 8 00:10:12.934779 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 8 00:10:12.978509 kubelet[2812]: I0508 00:10:12.978475 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-hostproc\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978509 kubelet[2812]: I0508 00:10:12.978503 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df8336e5-4f1a-4774-8a78-0d448ebe3f79-clustermesh-secrets\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978669 kubelet[2812]: I0508 00:10:12.978517 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df8336e5-4f1a-4774-8a78-0d448ebe3f79-hubble-tls\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978669 kubelet[2812]: I0508 00:10:12.978527 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vbx8\" (UniqueName: \"kubernetes.io/projected/df8336e5-4f1a-4774-8a78-0d448ebe3f79-kube-api-access-7vbx8\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978669 kubelet[2812]: I0508 00:10:12.978539 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df8336e5-4f1a-4774-8a78-0d448ebe3f79-cilium-ipsec-secrets\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978669 kubelet[2812]: I0508 00:10:12.978587 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-cilium-cgroup\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978669 kubelet[2812]: I0508 00:10:12.978599 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-etc-cni-netd\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978803 kubelet[2812]: I0508 00:10:12.978614 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df8336e5-4f1a-4774-8a78-0d448ebe3f79-cilium-config-path\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978803 kubelet[2812]: I0508 00:10:12.978655 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-lib-modules\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978803 kubelet[2812]: I0508 00:10:12.978669 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-host-proc-sys-kernel\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978803 kubelet[2812]: I0508 00:10:12.978680 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-cilium-run\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978803 kubelet[2812]: I0508 00:10:12.978689 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-bpf-maps\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978803 kubelet[2812]: I0508 00:10:12.978746 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-cni-path\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978931 kubelet[2812]: I0508 00:10:12.978756 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-xtables-lock\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.978931 kubelet[2812]: I0508 00:10:12.978766 2812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df8336e5-4f1a-4774-8a78-0d448ebe3f79-host-proc-sys-net\") pod \"cilium-8vs44\" (UID: \"df8336e5-4f1a-4774-8a78-0d448ebe3f79\") " pod="kube-system/cilium-8vs44" May 8 00:10:12.982413 sshd[4620]: Connection closed by 139.178.89.65 port 46110 May 8 00:10:12.982794 sshd-session[4617]: pam_unix(sshd:session): session closed for user core May 8 00:10:12.992266 systemd[1]: sshd@25-139.178.70.109:22-139.178.89.65:46110.service: Deactivated successfully. May 8 00:10:12.993897 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:10:12.994589 systemd-logind[1535]: Session 28 logged out. Waiting for processes to exit. May 8 00:10:12.998930 systemd[1]: Started sshd@26-139.178.70.109:22-139.178.89.65:46124.service - OpenSSH per-connection server daemon (139.178.89.65:46124). May 8 00:10:13.000939 systemd-logind[1535]: Removed session 28. May 8 00:10:13.031629 sshd[4626]: Accepted publickey for core from 139.178.89.65 port 46124 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:10:13.032079 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:10:13.034786 systemd-logind[1535]: New session 29 of user core. May 8 00:10:13.047814 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 8 00:10:13.197967 containerd[1561]: time="2025-05-08T00:10:13.197939412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8vs44,Uid:df8336e5-4f1a-4774-8a78-0d448ebe3f79,Namespace:kube-system,Attempt:0,}" May 8 00:10:13.210460 containerd[1561]: time="2025-05-08T00:10:13.210380769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:10:13.210460 containerd[1561]: time="2025-05-08T00:10:13.210425825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:10:13.210859 containerd[1561]: time="2025-05-08T00:10:13.210817905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:10:13.210893 containerd[1561]: time="2025-05-08T00:10:13.210878781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:10:13.226827 systemd[1]: Started cri-containerd-bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c.scope - libcontainer container bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c. May 8 00:10:13.241296 containerd[1561]: time="2025-05-08T00:10:13.241271023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8vs44,Uid:df8336e5-4f1a-4774-8a78-0d448ebe3f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\"" May 8 00:10:13.243390 containerd[1561]: time="2025-05-08T00:10:13.243372024Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:10:13.248371 containerd[1561]: time="2025-05-08T00:10:13.248347948Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679\"" May 8 00:10:13.249095 containerd[1561]: time="2025-05-08T00:10:13.248720301Z" level=info msg="StartContainer for \"5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679\"" May 8 00:10:13.271780 systemd[1]: Started cri-containerd-5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679.scope - libcontainer container 5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679. May 8 00:10:13.288082 containerd[1561]: time="2025-05-08T00:10:13.287323632Z" level=info msg="StartContainer for \"5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679\" returns successfully" May 8 00:10:13.302841 systemd[1]: cri-containerd-5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679.scope: Deactivated successfully. May 8 00:10:13.303896 systemd[1]: cri-containerd-5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679.scope: Consumed 13ms CPU time, 9.6M memory peak, 3M read from disk. 
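
Editor's note: the sequence above is the standard CRI flow for the new pod: RunPodSandbox returns the sandbox ID bb522a..., then the first init container (mount-cgroup) is created and started inside it. A condensed sketch of those three calls follows; error handling is trimmed, the image reference is a placeholder, and the config fields are the bare minimum the API accepts rather than what the kubelet actually sends.

    // runsandbox.go - condensed sketch of RunPodSandbox -> CreateContainer -> StartContainer.
    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// Sandbox metadata taken from the RunPodSandbox log entry above.
    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "cilium-8vs44",
    			Uid:       "df8336e5-4f1a-4774-8a78-0d448ebe3f79",
    			Namespace: "kube-system",
    			Attempt:   0,
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.15"}, // placeholder image tag
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("started", ctr.ContainerId)
    }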
May 8 00:10:13.321643 containerd[1561]: time="2025-05-08T00:10:13.321594695Z" level=info msg="shim disconnected" id=5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679 namespace=k8s.io May 8 00:10:13.321805 containerd[1561]: time="2025-05-08T00:10:13.321637903Z" level=warning msg="cleaning up after shim disconnected" id=5099ee2397796b85ea6865b8f7217037740c0f1ef13f0c1b116e88bc54c40679 namespace=k8s.io May 8 00:10:13.321805 containerd[1561]: time="2025-05-08T00:10:13.321663106Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:13.329004 containerd[1561]: time="2025-05-08T00:10:13.328978645Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:10:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:10:14.173005 containerd[1561]: time="2025-05-08T00:10:14.172919238Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:10:14.181316 containerd[1561]: time="2025-05-08T00:10:14.181284706Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857\"" May 8 00:10:14.181706 containerd[1561]: time="2025-05-08T00:10:14.181679301Z" level=info msg="StartContainer for \"2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857\"" May 8 00:10:14.220972 systemd[1]: Started cri-containerd-2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857.scope - libcontainer container 2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857. May 8 00:10:14.236864 containerd[1561]: time="2025-05-08T00:10:14.236836559Z" level=info msg="StartContainer for \"2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857\" returns successfully" May 8 00:10:14.248718 systemd[1]: cri-containerd-2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857.scope: Deactivated successfully. May 8 00:10:14.248894 systemd[1]: cri-containerd-2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857.scope: Consumed 11ms CPU time, 7.3M memory peak, 2M read from disk. May 8 00:10:14.268144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857-rootfs.mount: Deactivated successfully. 
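
Editor's note: "apply-sysctl-overwrites" above is a short-lived init container that adjusts kernel parameters before the agent starts, which is why its scope is deactivated within milliseconds. The stand-in below only illustrates the mechanism (writing under /proc/sys); the specific key and value are examples, not Cilium's actual overwrite list.

    // sysctl.go - illustrative stand-in for a sysctl-overwrite init step.
    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // setSysctl writes a single sysctl by dotted key, e.g. "net.ipv4.conf.all.rp_filter".
    func setSysctl(key, value string) error {
    	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
    	return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
    	// Example overwrite only; the real container derives its list from the agent config.
    	if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
    		log.Fatal(err)
    	}
    }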
May 8 00:10:14.271609 containerd[1561]: time="2025-05-08T00:10:14.271562777Z" level=info msg="shim disconnected" id=2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857 namespace=k8s.io May 8 00:10:14.271609 containerd[1561]: time="2025-05-08T00:10:14.271608549Z" level=warning msg="cleaning up after shim disconnected" id=2d09aeb44da407156eb7eb9f63d0254577592943c4a0d0900a3ce8b990195857 namespace=k8s.io May 8 00:10:14.271824 containerd[1561]: time="2025-05-08T00:10:14.271615329Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:14.279037 containerd[1561]: time="2025-05-08T00:10:14.279005099Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:10:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:10:15.171655 containerd[1561]: time="2025-05-08T00:10:15.171569678Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:10:15.239038 containerd[1561]: time="2025-05-08T00:10:15.238968447Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689\"" May 8 00:10:15.240117 containerd[1561]: time="2025-05-08T00:10:15.239472458Z" level=info msg="StartContainer for \"2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689\"" May 8 00:10:15.265844 systemd[1]: Started cri-containerd-2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689.scope - libcontainer container 2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689. May 8 00:10:15.293795 containerd[1561]: time="2025-05-08T00:10:15.292931434Z" level=info msg="StartContainer for \"2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689\" returns successfully" May 8 00:10:15.301276 systemd[1]: cri-containerd-2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689.scope: Deactivated successfully. May 8 00:10:15.315659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689-rootfs.mount: Deactivated successfully. 
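
Editor's note: the "mount-bpf-fs" container created above exists to ensure the BPF filesystem is mounted on the host so pinned maps survive agent restarts. A minimal sketch of that operation follows; it assumes the conventional /sys/fs/bpf target, needs CAP_SYS_ADMIN, and simplifies the already-mounted check.

    // mountbpf.go - minimal sketch of mounting the BPF filesystem.
    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	target := "/sys/fs/bpf"
    	if err := os.MkdirAll(target, 0o755); err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
    	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil && err != unix.EBUSY {
    		log.Fatal(err)
    	}
    }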
May 8 00:10:15.321315 containerd[1561]: time="2025-05-08T00:10:15.321199646Z" level=info msg="shim disconnected" id=2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689 namespace=k8s.io May 8 00:10:15.321315 containerd[1561]: time="2025-05-08T00:10:15.321242143Z" level=warning msg="cleaning up after shim disconnected" id=2698ca7eb30b1027b17ad0d40de0f7d63f1c20af9bf06068b7010e45a5aea689 namespace=k8s.io May 8 00:10:15.321315 containerd[1561]: time="2025-05-08T00:10:15.321248502Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:16.174201 containerd[1561]: time="2025-05-08T00:10:16.174150800Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:10:16.185408 containerd[1561]: time="2025-05-08T00:10:16.184682009Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6\"" May 8 00:10:16.185408 containerd[1561]: time="2025-05-08T00:10:16.185346037Z" level=info msg="StartContainer for \"862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6\"" May 8 00:10:16.206825 systemd[1]: Started cri-containerd-862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6.scope - libcontainer container 862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6. May 8 00:10:16.211459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797876693.mount: Deactivated successfully. May 8 00:10:16.222663 containerd[1561]: time="2025-05-08T00:10:16.222620813Z" level=info msg="StartContainer for \"862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6\" returns successfully" May 8 00:10:16.222790 systemd[1]: cri-containerd-862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6.scope: Deactivated successfully. May 8 00:10:16.234180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6-rootfs.mount: Deactivated successfully. May 8 00:10:16.237258 containerd[1561]: time="2025-05-08T00:10:16.237225022Z" level=info msg="shim disconnected" id=862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6 namespace=k8s.io May 8 00:10:16.237343 containerd[1561]: time="2025-05-08T00:10:16.237267654Z" level=warning msg="cleaning up after shim disconnected" id=862b10218b2f21fd6b03074019e7d2d32e23fa0d7f26bdb88532fb194ca2d5c6 namespace=k8s.io May 8 00:10:16.237343 containerd[1561]: time="2025-05-08T00:10:16.237274558Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:10:16.844098 kubelet[2812]: E0508 00:10:16.844021 2812 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:10:17.179045 containerd[1561]: time="2025-05-08T00:10:17.178662108Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:10:17.195637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1639932608.mount: Deactivated successfully. 
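
Editor's note: the recurring kubelet error "cni plugin not initialized" above persists until the freshly started cilium-agent writes a CNI configuration file, at which point containerd reports the network as ready. A small sketch of that readiness condition follows; the conf directory is the assumed default, and the extension filter is a simplification of what containerd actually accepts.

    // cnicheck.go - sketch of the "is a CNI config present?" readiness condition.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func cniConfigured(confDir string) bool {
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		return false
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println("network ready:", cniConfigured("/etc/cni/net.d"))
    }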
May 8 00:10:17.197059 containerd[1561]: time="2025-05-08T00:10:17.197035388Z" level=info msg="CreateContainer within sandbox \"bb522a416d1a51f98df72cc26be4c1ad1d2d2e4aff843b4289b59225d04ef96c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84c50b3307384743d4a7293161687789855ca9f6a7aec5cb901e0db8d354d683\"" May 8 00:10:17.197779 containerd[1561]: time="2025-05-08T00:10:17.197509569Z" level=info msg="StartContainer for \"84c50b3307384743d4a7293161687789855ca9f6a7aec5cb901e0db8d354d683\"" May 8 00:10:17.222811 systemd[1]: Started cri-containerd-84c50b3307384743d4a7293161687789855ca9f6a7aec5cb901e0db8d354d683.scope - libcontainer container 84c50b3307384743d4a7293161687789855ca9f6a7aec5cb901e0db8d354d683. May 8 00:10:17.239615 containerd[1561]: time="2025-05-08T00:10:17.239584795Z" level=info msg="StartContainer for \"84c50b3307384743d4a7293161687789855ca9f6a7aec5cb901e0db8d354d683\" returns successfully" May 8 00:10:17.816727 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 8 00:10:19.163758 kubelet[2812]: I0508 00:10:19.163508 2812 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:10:19Z","lastTransitionTime":"2025-05-08T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:10:19.367620 systemd[1]: run-containerd-runc-k8s.io-84c50b3307384743d4a7293161687789855ca9f6a7aec5cb901e0db8d354d683-runc.JwhlG4.mount: Deactivated successfully. May 8 00:10:20.374107 systemd-networkd[1481]: lxc_health: Link UP May 8 00:10:20.384878 systemd-networkd[1481]: lxc_health: Gained carrier May 8 00:10:21.211153 kubelet[2812]: I0508 00:10:21.210933 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8vs44" podStartSLOduration=9.21091742 podStartE2EDuration="9.21091742s" podCreationTimestamp="2025-05-08 00:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:10:18.189659733 +0000 UTC m=+131.773057137" watchObservedRunningTime="2025-05-08 00:10:21.21091742 +0000 UTC m=+134.794314814" May 8 00:10:22.292837 systemd-networkd[1481]: lxc_health: Gained IPv6LL May 8 00:10:25.702789 sshd[4629]: Connection closed by 139.178.89.65 port 46124 May 8 00:10:25.706911 sshd-session[4626]: pam_unix(sshd:session): session closed for user core May 8 00:10:25.708595 systemd[1]: sshd@26-139.178.70.109:22-139.178.89.65:46124.service: Deactivated successfully. May 8 00:10:25.709892 systemd[1]: session-29.scope: Deactivated successfully. May 8 00:10:25.710790 systemd-logind[1535]: Session 29 logged out. Waiting for processes to exit. May 8 00:10:25.711426 systemd-logind[1535]: Removed session 29.
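
Editor's note: the podStartSLOduration=9.21091742s figure in the final kubelet entry is, with no image pull involved (the pulling timestamps are zero-valued), simply observedRunningTime minus podCreationTimestamp. A quick check of that arithmetic using the two timestamps from the log:

    // startup.go - recompute the pod startup duration reported by the kubelet.
    package main

    import (
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-05-08 00:10:12 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	running, err := time.Parse(layout, "2025-05-08 00:10:21.21091742 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(running.Sub(created)) // 9.21091742s, matching the log
    }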