May 8 00:05:08.732886 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025 May 8 00:05:08.732903 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:05:08.732909 kernel: Disabled fast string operations May 8 00:05:08.732913 kernel: BIOS-provided physical RAM map: May 8 00:05:08.732917 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 8 00:05:08.732921 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 8 00:05:08.732957 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 8 00:05:08.732962 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 8 00:05:08.732967 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 8 00:05:08.732971 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 8 00:05:08.732975 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 8 00:05:08.732980 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 8 00:05:08.732984 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 8 00:05:08.732989 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 8 00:05:08.732995 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 8 00:05:08.733000 kernel: NX (Execute Disable) protection: active May 8 00:05:08.733005 kernel: APIC: Static calls initialized May 8 00:05:08.733010 kernel: SMBIOS 2.7 present. May 8 00:05:08.733015 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 8 00:05:08.733020 kernel: vmware: hypercall mode: 0x00 May 8 00:05:08.733024 kernel: Hypervisor detected: VMware May 8 00:05:08.733029 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 8 00:05:08.733035 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 8 00:05:08.733040 kernel: vmware: using clock offset of 2713247900 ns May 8 00:05:08.733045 kernel: tsc: Detected 3408.000 MHz processor May 8 00:05:08.733050 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:05:08.733055 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:05:08.733060 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 8 00:05:08.733065 kernel: total RAM covered: 3072M May 8 00:05:08.733070 kernel: Found optimal setting for mtrr clean up May 8 00:05:08.733076 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 8 00:05:08.733081 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 8 00:05:08.733087 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:05:08.733092 kernel: Using GB pages for direct mapping May 8 00:05:08.733097 kernel: ACPI: Early table checksum verification disabled May 8 00:05:08.733102 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 8 00:05:08.733107 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 8 00:05:08.733112 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 8 00:05:08.733120 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 8 00:05:08.733132 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:05:08.733144 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:05:08.733150 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 8 00:05:08.733155 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 8 00:05:08.733160 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 8 00:05:08.733165 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 8 00:05:08.733170 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 8 00:05:08.733177 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 8 00:05:08.733182 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 8 00:05:08.733187 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 8 00:05:08.733192 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:05:08.733197 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:05:08.733202 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 8 00:05:08.733207 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 8 00:05:08.733212 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 8 00:05:08.733217 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 8 00:05:08.733223 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 8 00:05:08.733228 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 8 00:05:08.733234 kernel: system APIC only can use physical flat May 8 00:05:08.733239 kernel: APIC: Switched APIC routing to: physical flat May 8 00:05:08.733244 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 8 00:05:08.733249 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 8 00:05:08.733254 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 8 00:05:08.733259 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 8 00:05:08.733264 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 8 00:05:08.733269 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 8 00:05:08.733275 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 8 00:05:08.733280 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 8 00:05:08.733285 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 8 00:05:08.733290 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 8 00:05:08.733295 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 8 00:05:08.733300 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 8 00:05:08.733305 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 8 00:05:08.733310 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 8 00:05:08.733315 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 8 00:05:08.733320 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 8 00:05:08.733326 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 8 00:05:08.733331 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 8 00:05:08.733336 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 8 00:05:08.733341 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 8 00:05:08.733346 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 8 00:05:08.733351 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 8 00:05:08.733356 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 8 00:05:08.733361 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 8 00:05:08.733366 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 8 00:05:08.733371 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 8 00:05:08.733377 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 8 00:05:08.733382 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 8 00:05:08.733387 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 8 00:05:08.733392 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 May 8 00:05:08.733397 kernel: SRAT: PXM 
0 -> APIC 0x3c -> Node 0 May 8 00:05:08.733402 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 8 00:05:08.733406 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 8 00:05:08.733411 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 8 00:05:08.733416 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 8 00:05:08.733421 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 8 00:05:08.733427 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 8 00:05:08.733432 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 8 00:05:08.733437 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 8 00:05:08.733442 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 8 00:05:08.733447 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 8 00:05:08.733452 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 8 00:05:08.733457 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 8 00:05:08.733462 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 8 00:05:08.733467 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 8 00:05:08.733472 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 8 00:05:08.733478 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 8 00:05:08.733483 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 8 00:05:08.733488 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 8 00:05:08.733493 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 8 00:05:08.733498 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 8 00:05:08.733503 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 8 00:05:08.733508 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 8 00:05:08.733513 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 8 00:05:08.733517 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 8 00:05:08.733522 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 8 00:05:08.733528 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 8 00:05:08.733533 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 8 00:05:08.733538 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 8 00:05:08.733547 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 8 00:05:08.733553 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 8 00:05:08.733558 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 8 00:05:08.733564 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 8 00:05:08.733569 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 8 00:05:08.733574 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 8 00:05:08.733580 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 8 00:05:08.733586 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 8 00:05:08.733591 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 8 00:05:08.733596 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 8 00:05:08.733601 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 8 00:05:08.733607 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 8 00:05:08.733612 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 8 00:05:08.733617 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 8 00:05:08.733623 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 8 00:05:08.733628 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 8 00:05:08.733634 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 8 00:05:08.733639 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 8 00:05:08.733645 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 8 00:05:08.733650 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 8 00:05:08.733655 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 8 00:05:08.733661 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 8 00:05:08.733666 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 8 00:05:08.733671 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 8 00:05:08.733677 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 May 8 00:05:08.733682 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 8 
00:05:08.733688 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 8 00:05:08.733693 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 8 00:05:08.733699 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 8 00:05:08.733704 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 8 00:05:08.733710 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 8 00:05:08.733715 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 8 00:05:08.733720 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 8 00:05:08.733725 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 8 00:05:08.733731 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 8 00:05:08.733736 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 8 00:05:08.733742 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 8 00:05:08.733755 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 8 00:05:08.733761 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 8 00:05:08.733766 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 8 00:05:08.733772 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 8 00:05:08.733777 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 8 00:05:08.733782 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 8 00:05:08.733788 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 8 00:05:08.733793 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 8 00:05:08.733798 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 8 00:05:08.733805 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 8 00:05:08.733810 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 8 00:05:08.733815 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 8 00:05:08.733821 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 8 00:05:08.733826 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 8 00:05:08.733831 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 8 00:05:08.733836 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 8 00:05:08.733842 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 8 00:05:08.733847 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 8 00:05:08.733852 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 8 00:05:08.733858 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 8 00:05:08.733864 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 8 00:05:08.733869 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 8 00:05:08.733875 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 8 00:05:08.733880 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 8 00:05:08.733885 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 8 00:05:08.733891 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 8 00:05:08.733896 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 8 00:05:08.733901 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 8 00:05:08.733907 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 8 00:05:08.733912 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 8 00:05:08.733918 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 8 00:05:08.733937 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 8 00:05:08.733944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 8 00:05:08.733949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 8 00:05:08.733955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 8 00:05:08.733961 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 8 00:05:08.733966 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 8 00:05:08.733972 kernel: Zone ranges: May 8 00:05:08.733977 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:05:08.733984 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 8 00:05:08.733989 kernel: Normal empty May 8 00:05:08.733995 kernel: Movable zone start 
for each node May 8 00:05:08.734000 kernel: Early memory node ranges May 8 00:05:08.734005 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 8 00:05:08.734011 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 8 00:05:08.734016 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 8 00:05:08.734022 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 8 00:05:08.734028 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:05:08.734033 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 8 00:05:08.734040 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 8 00:05:08.734045 kernel: ACPI: PM-Timer IO Port: 0x1008 May 8 00:05:08.734050 kernel: system APIC only can use physical flat May 8 00:05:08.734056 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 8 00:05:08.734061 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 8 00:05:08.734066 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 8 00:05:08.734072 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 8 00:05:08.734077 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 8 00:05:08.734082 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 8 00:05:08.734089 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 8 00:05:08.734094 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 8 00:05:08.734099 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 8 00:05:08.734105 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 8 00:05:08.734110 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 8 00:05:08.734116 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 8 00:05:08.734121 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 8 00:05:08.734126 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 8 00:05:08.734131 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 8 00:05:08.734137 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 8 00:05:08.734143 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 8 00:05:08.734149 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 8 00:05:08.734154 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 8 00:05:08.734159 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 8 00:05:08.734165 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 8 00:05:08.734170 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 8 00:05:08.734175 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 8 00:05:08.734181 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 8 00:05:08.734186 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 8 00:05:08.734193 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 8 00:05:08.734198 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 8 00:05:08.734203 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 8 00:05:08.734209 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 8 00:05:08.734214 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 8 00:05:08.734219 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 8 00:05:08.734224 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 8 00:05:08.734230 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 8 00:05:08.734235 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] 
high edge lint[0x1]) May 8 00:05:08.734240 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 8 00:05:08.734247 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 8 00:05:08.734252 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 8 00:05:08.734257 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 8 00:05:08.734263 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 8 00:05:08.734268 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 8 00:05:08.734273 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 8 00:05:08.734278 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 8 00:05:08.734284 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 8 00:05:08.734289 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 8 00:05:08.734295 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 8 00:05:08.734301 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 8 00:05:08.734306 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 8 00:05:08.734312 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 8 00:05:08.734317 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 8 00:05:08.734322 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 8 00:05:08.734328 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 8 00:05:08.734333 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 8 00:05:08.734338 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 8 00:05:08.734343 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 8 00:05:08.734350 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 8 00:05:08.734355 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 8 00:05:08.734361 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 8 00:05:08.734366 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 8 00:05:08.734371 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 8 00:05:08.734377 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 8 00:05:08.734382 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 8 00:05:08.734387 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 8 00:05:08.734392 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 8 00:05:08.734398 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 8 00:05:08.734404 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 8 00:05:08.734409 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 8 00:05:08.734415 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 8 00:05:08.734420 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 8 00:05:08.734426 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 8 00:05:08.734431 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 8 00:05:08.734436 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 8 00:05:08.734442 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 8 00:05:08.734447 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 8 00:05:08.734452 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 8 00:05:08.734459 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 8 00:05:08.734465 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 8 00:05:08.734470 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 8 
00:05:08.734475 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 8 00:05:08.734481 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 8 00:05:08.734486 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 8 00:05:08.734491 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 8 00:05:08.734497 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 8 00:05:08.734502 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 8 00:05:08.734508 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 8 00:05:08.734514 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 8 00:05:08.734519 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 8 00:05:08.734524 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 8 00:05:08.734530 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 8 00:05:08.734535 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 8 00:05:08.734540 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 8 00:05:08.734546 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 8 00:05:08.734551 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 8 00:05:08.734556 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 8 00:05:08.734563 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 8 00:05:08.734568 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 8 00:05:08.734573 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 8 00:05:08.734579 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 8 00:05:08.734584 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 8 00:05:08.734589 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 8 00:05:08.734595 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 8 00:05:08.734600 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 8 00:05:08.734605 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 8 00:05:08.734611 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 8 00:05:08.734617 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 8 00:05:08.734622 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 8 00:05:08.734628 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 8 00:05:08.734633 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 8 00:05:08.734639 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 8 00:05:08.734644 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 8 00:05:08.734649 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 8 00:05:08.734655 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 8 00:05:08.734660 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 8 00:05:08.734666 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 8 00:05:08.734672 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 8 00:05:08.734677 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 8 00:05:08.734682 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 8 00:05:08.734688 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 8 00:05:08.734693 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 8 00:05:08.734698 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 8 00:05:08.734704 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 8 00:05:08.734709 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 8 00:05:08.734714 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 8 00:05:08.734721 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 8 00:05:08.734726 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 8 00:05:08.734731 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 8 00:05:08.734736 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 8 00:05:08.734742 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 8 00:05:08.734747 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 8 00:05:08.734753 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 8 00:05:08.734758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 8 00:05:08.734763 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:05:08.734770 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 8 00:05:08.734775 kernel: TSC deadline timer available May 8 00:05:08.734781 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 8 00:05:08.734786 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 8 00:05:08.734792 kernel: Booting paravirtualized kernel on VMware hypervisor May 8 00:05:08.734797 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:05:08.734803 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 8 00:05:08.734808 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 May 8 00:05:08.734814 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 May 8 00:05:08.734820 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 8 00:05:08.734826 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 8 00:05:08.734831 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 8 00:05:08.734836 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 8 00:05:08.734842 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 8 00:05:08.734854 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 8 00:05:08.734860 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 8 00:05:08.734866 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 8 00:05:08.734872 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 8 00:05:08.734878 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 8 00:05:08.734884 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 8 00:05:08.734889 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 8 00:05:08.734895 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 8 00:05:08.734901 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 8 00:05:08.734906 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 8 00:05:08.734912 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 8 00:05:08.734918 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:05:08.734933 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 8 00:05:08.734947 kernel: random: crng init done May 8 00:05:08.734954 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 8 00:05:08.734962 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 8 00:05:08.734967 kernel: printk: log_buf_len min size: 262144 bytes May 8 00:05:08.734973 kernel: printk: log_buf_len: 1048576 bytes May 8 00:05:08.734979 kernel: printk: early log buf free: 239648(91%) May 8 00:05:08.734985 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:05:08.734991 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 8 00:05:08.734997 kernel: Fallback order for Node 0: 0 May 8 00:05:08.735006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 8 00:05:08.735013 kernel: Policy zone: DMA32 May 8 00:05:08.735018 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:05:08.735024 kernel: Memory: 1934288K/2096628K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 162080K reserved, 0K cma-reserved) May 8 00:05:08.735032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 8 00:05:08.735038 kernel: ftrace: allocating 37918 entries in 149 pages May 8 00:05:08.735044 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:05:08.735049 kernel: Dynamic Preempt: voluntary May 8 00:05:08.735055 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:05:08.735061 kernel: rcu: RCU event tracing is enabled. May 8 00:05:08.735067 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 8 00:05:08.735073 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:05:08.735079 kernel: Rude variant of Tasks RCU enabled. May 8 00:05:08.735084 kernel: Tracing variant of Tasks RCU enabled. May 8 00:05:08.735091 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:05:08.735097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 8 00:05:08.735103 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 8 00:05:08.735108 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 8 00:05:08.735114 kernel: Console: colour VGA+ 80x25 May 8 00:05:08.735120 kernel: printk: console [tty0] enabled May 8 00:05:08.735126 kernel: printk: console [ttyS0] enabled May 8 00:05:08.735131 kernel: ACPI: Core revision 20230628 May 8 00:05:08.735137 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 8 00:05:08.735144 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:05:08.735150 kernel: x2apic enabled May 8 00:05:08.735156 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:05:08.735162 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:05:08.735167 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:05:08.735173 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 8 00:05:08.735179 kernel: Disabled fast string operations May 8 00:05:08.735185 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 8 00:05:08.735190 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 8 00:05:08.735197 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:05:08.735203 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 8 00:05:08.735209 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 8 00:05:08.735215 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 8 00:05:08.735221 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:05:08.735227 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 8 00:05:08.735232 kernel: RETBleed: Mitigation: Enhanced IBRS May 8 00:05:08.735238 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:05:08.735244 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:05:08.735251 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:05:08.735257 kernel: SRBDS: Unknown: Dependent on hypervisor status May 8 00:05:08.735263 kernel: GDS: Unknown: Dependent on hypervisor status May 8 00:05:08.735269 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:05:08.735275 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:05:08.735280 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:05:08.735286 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:05:08.735292 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:05:08.735298 kernel: Freeing SMP alternatives memory: 32K May 8 00:05:08.735304 kernel: pid_max: default: 131072 minimum: 1024 May 8 00:05:08.735310 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:05:08.735317 kernel: landlock: Up and running. May 8 00:05:08.735323 kernel: SELinux: Initializing. May 8 00:05:08.735328 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:05:08.735334 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:05:08.735340 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 8 00:05:08.735346 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:05:08.735352 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:05:08.735359 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:05:08.735364 kernel: Performance Events: Skylake events, core PMU driver. 
May 8 00:05:08.735370 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 8 00:05:08.735376 kernel: core: CPUID marked event: 'instructions' unavailable May 8 00:05:08.735382 kernel: core: CPUID marked event: 'bus cycles' unavailable May 8 00:05:08.735387 kernel: core: CPUID marked event: 'cache references' unavailable May 8 00:05:08.735393 kernel: core: CPUID marked event: 'cache misses' unavailable May 8 00:05:08.735398 kernel: core: CPUID marked event: 'branch instructions' unavailable May 8 00:05:08.735405 kernel: core: CPUID marked event: 'branch misses' unavailable May 8 00:05:08.735411 kernel: ... version: 1 May 8 00:05:08.735416 kernel: ... bit width: 48 May 8 00:05:08.735422 kernel: ... generic registers: 4 May 8 00:05:08.735428 kernel: ... value mask: 0000ffffffffffff May 8 00:05:08.735434 kernel: ... max period: 000000007fffffff May 8 00:05:08.735439 kernel: ... fixed-purpose events: 0 May 8 00:05:08.735445 kernel: ... event mask: 000000000000000f May 8 00:05:08.735451 kernel: signal: max sigframe size: 1776 May 8 00:05:08.735458 kernel: rcu: Hierarchical SRCU implementation. May 8 00:05:08.735463 kernel: rcu: Max phase no-delay instances is 400. May 8 00:05:08.735469 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 8 00:05:08.735475 kernel: smp: Bringing up secondary CPUs ... May 8 00:05:08.735481 kernel: smpboot: x86: Booting SMP configuration: May 8 00:05:08.735486 kernel: .... node #0, CPUs: #1 May 8 00:05:08.735492 kernel: Disabled fast string operations May 8 00:05:08.735498 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 8 00:05:08.735504 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 8 00:05:08.735509 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:05:08.735516 kernel: smpboot: Max logical packages: 128 May 8 00:05:08.735522 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 8 00:05:08.735528 kernel: devtmpfs: initialized May 8 00:05:08.735534 kernel: x86/mm: Memory block size: 128MB May 8 00:05:08.735539 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 8 00:05:08.735545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:05:08.735551 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:05:08.735557 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:05:08.735563 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:05:08.735569 kernel: audit: initializing netlink subsys (disabled) May 8 00:05:08.735575 kernel: audit: type=2000 audit(1746662707.066:1): state=initialized audit_enabled=0 res=1 May 8 00:05:08.735581 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:05:08.735587 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:05:08.735592 kernel: cpuidle: using governor menu May 8 00:05:08.735598 kernel: Simple Boot Flag at 0x36 set to 0x80 May 8 00:05:08.735604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:05:08.735610 kernel: dca service started, version 1.12.1 May 8 00:05:08.735615 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 8 00:05:08.735622 kernel: PCI: Using configuration type 1 for base access May 8 00:05:08.735628 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:05:08.735634 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:05:08.735640 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:05:08.735645 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:05:08.735651 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:05:08.735657 kernel: ACPI: Added _OSI(Module Device) May 8 00:05:08.735663 kernel: ACPI: Added _OSI(Processor Device) May 8 00:05:08.735668 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:05:08.735675 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:05:08.735681 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:05:08.735687 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 8 00:05:08.735693 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:05:08.735698 kernel: ACPI: Interpreter enabled May 8 00:05:08.735704 kernel: ACPI: PM: (supports S0 S1 S5) May 8 00:05:08.735711 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:05:08.735717 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:05:08.735722 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:05:08.735729 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 8 00:05:08.735735 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 8 00:05:08.735812 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:05:08.735865 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 8 00:05:08.735915 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 8 00:05:08.735923 kernel: PCI host bridge to bus 0000:00 May 8 00:05:08.735987 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:05:08.736036 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 8 00:05:08.736081 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:05:08.736125 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:05:08.736169 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 8 00:05:08.736212 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 8 00:05:08.736274 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 8 00:05:08.736337 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 8 00:05:08.736397 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 8 00:05:08.736453 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 8 00:05:08.736505 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 8 00:05:08.736556 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 8 00:05:08.736607 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 8 00:05:08.736657 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 8 00:05:08.736711 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 8 00:05:08.736765 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 8 00:05:08.736817 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 8 00:05:08.736868 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 8 00:05:08.736922 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 8 00:05:08.736995 kernel: pci 0000:00:07.7: reg 0x10: [io 
0x1080-0x10bf] May 8 00:05:08.737060 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 8 00:05:08.737115 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 8 00:05:08.737166 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 8 00:05:08.737217 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 8 00:05:08.737267 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 8 00:05:08.737317 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 8 00:05:08.737367 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:05:08.737425 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 8 00:05:08.737481 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.737532 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 8 00:05:08.737586 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.737638 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 8 00:05:08.737692 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.737746 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 8 00:05:08.737803 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.737855 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 8 00:05:08.737909 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.737977 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 8 00:05:08.738033 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738088 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 8 00:05:08.738162 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738215 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 8 00:05:08.738269 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738321 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 8 00:05:08.738378 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738429 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 8 00:05:08.738494 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738551 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 8 00:05:08.738605 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738656 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 8 00:05:08.738713 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738767 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 8 00:05:08.738821 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738872 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 8 00:05:08.738939 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.738999 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 8 00:05:08.739054 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739109 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 8 00:05:08.739164 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739217 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 8 00:05:08.739271 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739323 kernel: pci 0000:00:17.0: PME# supported from D0 
D3hot D3cold May 8 00:05:08.739377 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739431 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 8 00:05:08.739488 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739544 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 8 00:05:08.739601 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739652 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 8 00:05:08.739706 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739757 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 8 00:05:08.739814 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.739865 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 8 00:05:08.739919 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740014 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 8 00:05:08.740071 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740123 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 8 00:05:08.740181 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740233 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 8 00:05:08.740287 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740338 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 8 00:05:08.740393 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740444 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 8 00:05:08.740502 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740555 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 8 00:05:08.740624 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740678 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 8 00:05:08.740732 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740784 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 8 00:05:08.740841 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.740892 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 8 00:05:08.742970 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 8 00:05:08.743038 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 8 00:05:08.743094 kernel: pci_bus 0000:01: extended config space not accessible May 8 00:05:08.743149 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:05:08.743201 kernel: pci_bus 0000:02: extended config space not accessible May 8 00:05:08.743213 kernel: acpiphp: Slot [32] registered May 8 00:05:08.743219 kernel: acpiphp: Slot [33] registered May 8 00:05:08.743225 kernel: acpiphp: Slot [34] registered May 8 00:05:08.743231 kernel: acpiphp: Slot [35] registered May 8 00:05:08.743237 kernel: acpiphp: Slot [36] registered May 8 00:05:08.743242 kernel: acpiphp: Slot [37] registered May 8 00:05:08.743248 kernel: acpiphp: Slot [38] registered May 8 00:05:08.743254 kernel: acpiphp: Slot [39] registered May 8 00:05:08.743260 kernel: acpiphp: Slot [40] registered May 8 00:05:08.743267 kernel: acpiphp: Slot [41] registered May 8 00:05:08.743273 kernel: acpiphp: Slot [42] registered May 8 00:05:08.743278 kernel: acpiphp: Slot [43] registered May 8 00:05:08.743284 kernel: acpiphp: Slot [44] registered May 8 
00:05:08.743290 kernel: acpiphp: Slot [45] registered May 8 00:05:08.743296 kernel: acpiphp: Slot [46] registered May 8 00:05:08.743301 kernel: acpiphp: Slot [47] registered May 8 00:05:08.743307 kernel: acpiphp: Slot [48] registered May 8 00:05:08.743313 kernel: acpiphp: Slot [49] registered May 8 00:05:08.743320 kernel: acpiphp: Slot [50] registered May 8 00:05:08.743325 kernel: acpiphp: Slot [51] registered May 8 00:05:08.743331 kernel: acpiphp: Slot [52] registered May 8 00:05:08.743337 kernel: acpiphp: Slot [53] registered May 8 00:05:08.743343 kernel: acpiphp: Slot [54] registered May 8 00:05:08.743349 kernel: acpiphp: Slot [55] registered May 8 00:05:08.743354 kernel: acpiphp: Slot [56] registered May 8 00:05:08.743360 kernel: acpiphp: Slot [57] registered May 8 00:05:08.743366 kernel: acpiphp: Slot [58] registered May 8 00:05:08.743373 kernel: acpiphp: Slot [59] registered May 8 00:05:08.743379 kernel: acpiphp: Slot [60] registered May 8 00:05:08.743384 kernel: acpiphp: Slot [61] registered May 8 00:05:08.743390 kernel: acpiphp: Slot [62] registered May 8 00:05:08.743396 kernel: acpiphp: Slot [63] registered May 8 00:05:08.743448 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 8 00:05:08.743500 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:05:08.743550 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:05:08.743600 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:05:08.743653 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 8 00:05:08.743703 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 8 00:05:08.743754 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 8 00:05:08.743803 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 8 00:05:08.743854 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 8 00:05:08.743911 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 8 00:05:08.743979 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 8 00:05:08.744035 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 8 00:05:08.744086 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:05:08.744137 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 8 00:05:08.744227 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:05:08.744294 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:05:08.744348 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:05:08.744398 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:05:08.744450 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:05:08.744504 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:05:08.744555 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:05:08.744604 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:05:08.744656 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:05:08.744706 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:05:08.744757 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:05:08.744806 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:05:08.744860 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:05:08.744913 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:05:08.744992 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:05:08.745043 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:05:08.745093 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:05:08.745144 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:05:08.745199 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:05:08.745248 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:05:08.745299 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:05:08.745352 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:05:08.745402 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:05:08.745452 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:05:08.745506 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:05:08.745557 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:05:08.745607 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:05:08.745664 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 8 00:05:08.745717 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 8 00:05:08.745770 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 8 00:05:08.745823 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 8 00:05:08.745874 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 8 00:05:08.747950 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 8 00:05:08.748019 kernel: pci 0000:0b:00.0: supports D1 D2 May 8 00:05:08.748076 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 00:05:08.748130 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 8 00:05:08.748184 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:05:08.748235 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:05:08.748285 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:05:08.748338 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:05:08.748392 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:05:08.748444 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:05:08.748494 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:05:08.748547 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:05:08.748597 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:05:08.748648 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:05:08.748699 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:05:08.748754 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:05:08.748804 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:05:08.748855 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:05:08.748908 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:05:08.749473 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:05:08.749529 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:05:08.749582 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:05:08.749634 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:05:08.749687 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:05:08.749739 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:05:08.749790 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:05:08.749841 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:05:08.749893 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:05:08.749952 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:05:08.750023 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:05:08.750078 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:05:08.750131 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:05:08.750182 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:05:08.750233 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:05:08.750284 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:05:08.750334 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:05:08.750385 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:05:08.750436 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:05:08.750488 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:05:08.750541 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:05:08.750592 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:05:08.750642 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:05:08.750694 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:05:08.750744 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:05:08.750794 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] May 8 00:05:08.750845 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:05:08.750897 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:05:08.752973 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:05:08.753040 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:05:08.753095 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:05:08.753147 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:05:08.753199 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:05:08.753250 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:05:08.753301 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:05:08.753357 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:05:08.753409 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:05:08.753459 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:05:08.753511 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:05:08.753562 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:05:08.753613 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:05:08.753662 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:05:08.753716 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:05:08.753768 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:05:08.753819 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:05:08.753869 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:05:08.753921 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:05:08.753998 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:05:08.754050 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:05:08.754103 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:05:08.754154 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:05:08.754207 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:05:08.754261 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:05:08.754312 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:05:08.754362 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:05:08.754414 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:05:08.754464 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:05:08.754514 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:05:08.754566 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:05:08.754619 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:05:08.754669 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:05:08.754721 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:05:08.754771 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:05:08.754821 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:05:08.754830 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 8 00:05:08.754836 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 8 00:05:08.754842 kernel: ACPI: PCI: Interrupt 
link LNKB disabled May 8 00:05:08.754850 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:05:08.754856 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 8 00:05:08.754862 kernel: iommu: Default domain type: Translated May 8 00:05:08.754867 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:05:08.754873 kernel: PCI: Using ACPI for IRQ routing May 8 00:05:08.754879 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:05:08.754885 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 8 00:05:08.754891 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 8 00:05:08.755219 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 8 00:05:08.755279 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 8 00:05:08.755331 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:05:08.755340 kernel: vgaarb: loaded May 8 00:05:08.755347 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 8 00:05:08.755353 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 8 00:05:08.755359 kernel: clocksource: Switched to clocksource tsc-early May 8 00:05:08.755365 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:05:08.755371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:05:08.755377 kernel: pnp: PnP ACPI init May 8 00:05:08.755434 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 8 00:05:08.755482 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 8 00:05:08.755527 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 8 00:05:08.755577 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 8 00:05:08.755627 kernel: pnp 00:06: [dma 2] May 8 00:05:08.755680 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 8 00:05:08.755729 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 8 00:05:08.755775 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 8 00:05:08.755783 kernel: pnp: PnP ACPI: found 8 devices May 8 00:05:08.755789 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:05:08.755795 kernel: NET: Registered PF_INET protocol family May 8 00:05:08.755801 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:05:08.755807 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 8 00:05:08.755813 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:05:08.755819 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 8 00:05:08.755827 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 8 00:05:08.755832 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 8 00:05:08.755838 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:05:08.755844 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 8 00:05:08.755850 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:05:08.755856 kernel: NET: Registered PF_XDP protocol family May 8 00:05:08.755907 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:05:08.755980 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 8 00:05:08.756057 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 8 00:05:08.756118 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 8 00:05:08.756172 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 8 00:05:08.756225 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 8 00:05:08.756277 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 8 00:05:08.756328 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 8 00:05:08.756383 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 8 00:05:08.756435 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 8 00:05:08.756486 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 8 00:05:08.756537 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 8 00:05:08.756588 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 8 00:05:08.756642 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 8 00:05:08.756693 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 8 00:05:08.756744 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 8 00:05:08.756794 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 8 00:05:08.756845 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 8 00:05:08.756896 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 8 00:05:08.756998 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 8 00:05:08.757061 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 8 00:05:08.757112 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 8 00:05:08.757163 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 8 00:05:08.757215 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:05:08.757265 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:05:08.757315 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.757369 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.757421 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.757471 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.757522 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.757573 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.757623 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.757674 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.757724 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.757778 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.757830 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.757881 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759030 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759088 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759141 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759192 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759244 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759298 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759348 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759399 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759449 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759501 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759551 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759602 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759653 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759706 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759758 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759810 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.759861 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.759912 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.760722 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:05:08.760795 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.760849 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.760904 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761063 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761117 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761169 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761221 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761273 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761324 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761375 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761429 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761480 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761531 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761582 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761632 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761683 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761733 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761783 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761834 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761887 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.761944 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.761995 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] May 8 00:05:08.762045 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762095 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762146 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762196 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762246 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762297 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762348 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762420 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762484 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762537 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762587 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762639 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762689 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762738 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762789 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762840 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.762893 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.762957 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763013 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763064 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763115 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763167 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763217 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763269 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763319 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763371 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763425 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763476 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763527 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763578 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:05:08.763628 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:05:08.763680 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:05:08.763733 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:05:08.763784 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:05:08.763834 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:05:08.763887 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:05:08.763954 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:05:08.764009 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:05:08.764059 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:05:08.764110 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:05:08.764161 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:05:08.764214 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:05:08.764265 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:05:08.764319 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:05:08.764370 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:05:08.764422 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:05:08.764472 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:05:08.764522 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 00:05:08.764573 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:05:08.764623 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:05:08.764674 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:05:08.764725 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:05:08.764778 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:05:08.764829 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:05:08.764879 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:05:08.764944 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:05:08.764998 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:05:08.765049 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:05:08.765102 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:05:08.765153 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:05:08.765203 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:05:08.765254 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:05:08.765305 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:05:08.765356 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:05:08.765409 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:05:08.765462 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:05:08.765513 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:05:08.765564 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:05:08.765618 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:05:08.765671 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:05:08.765722 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 8 00:05:08.765773 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:05:08.765824 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:05:08.765877 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:05:08.765943 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:05:08.766000 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:05:08.766056 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:05:08.766110 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:05:08.766162 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:05:08.766212 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:05:08.766264 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:05:08.766315 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:05:08.766366 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:05:08.766416 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:05:08.766467 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:05:08.766518 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:05:08.766572 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:05:08.766623 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:05:08.766675 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:05:08.766726 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:05:08.766777 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:05:08.766828 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:05:08.766880 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:05:08.766938 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 8 00:05:08.766990 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:05:08.767046 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:05:08.767100 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:05:08.767151 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:05:08.767201 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:05:08.767252 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:05:08.767303 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:05:08.767353 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:05:08.767404 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:05:08.767459 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:05:08.767578 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:05:08.767840 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:05:08.767899 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:05:08.767968 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:05:08.768030 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:05:08.768083 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:05:08.768133 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:05:08.768184 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:05:08.768236 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:05:08.768287 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:05:08.768338 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:05:08.768393 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:05:08.768445 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 00:05:08.768496 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:05:08.768547 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:05:08.768598 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:05:08.768649 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:05:08.768700 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:05:08.768750 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:05:08.768801 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:05:08.768854 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:05:08.768906 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:05:08.770572 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:05:08.770638 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:05:08.770691 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:05:08.770742 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:05:08.770794 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:05:08.770845 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:05:08.770895 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:05:08.770975 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:05:08.771033 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:05:08.771085 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:05:08.771139 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:05:08.771198 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:05:08.771253 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:05:08.771306 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 00:05:08.771362 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:05:08.771414 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:05:08.771469 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:05:08.771526 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:05:08.771581 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:05:08.771633 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:05:08.771681 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:05:08.771730 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:05:08.771774 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:05:08.771823 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:05:08.771873 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:05:08.771923 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:05:08.772031 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:05:08.772078 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:05:08.772125 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:05:08.772177 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:05:08.772224 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:05:08.772269 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:05:08.772321 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:05:08.772371 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:05:08.772423 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:05:08.772475 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:05:08.772528 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] May 8 00:05:08.772576 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:05:08.772633 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:05:08.772689 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:05:08.772736 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:05:08.772791 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:05:08.772840 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:05:08.772893 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:05:08.774594 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:05:08.774649 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:05:08.774700 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:05:08.774750 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:05:08.774798 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:05:08.774851 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:05:08.774907 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:05:08.774980 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:05:08.775028 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:05:08.775075 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:05:08.775125 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:05:08.775172 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:05:08.775219 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:05:08.775272 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:05:08.775322 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 00:05:08.775369 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:05:08.775419 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:05:08.775465 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:05:08.775516 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:05:08.775563 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:05:08.775615 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:05:08.775662 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:05:08.775712 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:05:08.775760 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:05:08.775810 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:05:08.775857 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:05:08.775907 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:05:08.775974 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:05:08.776023 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:05:08.776074 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:05:08.776120 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:05:08.776166 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:05:08.776216 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] May 8 00:05:08.776266 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:05:08.776313 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:05:08.776365 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:05:08.776412 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:05:08.776463 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:05:08.776509 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:05:08.776560 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:05:08.776610 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:05:08.776660 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:05:08.776708 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:05:08.776759 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:05:08.776806 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:05:08.776859 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:05:08.776907 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:05:08.777033 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:05:08.777086 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:05:08.777134 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:05:08.777181 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:05:08.777231 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:05:08.777282 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:05:08.777335 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:05:08.777383 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:05:08.777433 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:05:08.777485 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:05:08.777537 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:05:08.777587 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:05:08.777636 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:05:08.777684 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:05:08.777734 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:05:08.777781 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:05:08.777838 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:05:08.777849 kernel: PCI: CLS 32 bytes, default 64 May 8 00:05:08.777856 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:05:08.777863 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:05:08.777869 kernel: clocksource: Switched to clocksource tsc May 8 00:05:08.777875 kernel: Initialise system trusted keyrings May 8 00:05:08.777881 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:05:08.777887 kernel: Key type asymmetric registered May 8 00:05:08.777893 kernel: Asymmetric key parser 'x509' registered May 8 00:05:08.777900 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) May 8 00:05:08.777907 kernel: io scheduler mq-deadline registered May 8 00:05:08.777914 kernel: io scheduler kyber registered May 8 00:05:08.777920 kernel: io scheduler bfq registered May 8 00:05:08.777984 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:05:08.778037 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778091 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:05:08.778143 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778196 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:05:08.778248 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778302 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:05:08.778354 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778405 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:05:08.778457 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778508 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:05:08.778564 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778615 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:05:08.778667 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778718 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:05:08.778770 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778824 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:05:08.778876 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.778957 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:05:08.779012 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779064 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:05:08.779118 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779169 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 8 00:05:08.779224 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779275 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 8 00:05:08.779325 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779377 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 May 8 00:05:08.779429 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779480 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 8 00:05:08.779534 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779587 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 8 00:05:08.779639 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779691 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 8 00:05:08.779742 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779797 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 8 00:05:08.779848 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.779898 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 8 00:05:08.779957 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780009 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 8 00:05:08.780060 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780114 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 8 00:05:08.780169 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780221 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 8 00:05:08.780272 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780323 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 8 00:05:08.780375 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780429 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 8 00:05:08.780480 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780530 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 8 00:05:08.780581 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780633 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 8 00:05:08.780685 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780735 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 8 00:05:08.780789 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.780840 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 8 00:05:08.780892 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.781089 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 8 00:05:08.781359 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.781421 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 8 00:05:08.781476 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.781529 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 8 00:05:08.781580 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.781632 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 8 00:05:08.781687 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:05:08.781696 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:05:08.781703 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:05:08.781709 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:05:08.781716 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 8 00:05:08.781722 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:05:08.781728 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:05:08.781779 kernel: rtc_cmos 00:01: registered as rtc0 May 8 00:05:08.781830 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:05:08 UTC (1746662708) May 8 00:05:08.781878 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 8 00:05:08.781891 kernel: intel_pstate: CPU model not supported May 8 00:05:08.781902 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:05:08.781913 kernel: NET: Registered PF_INET6 protocol family May 8 00:05:08.781920 kernel: Segment Routing with IPv6 May 8 00:05:08.781964 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:05:08.781971 kernel: NET: Registered PF_PACKET protocol family May 8 00:05:08.781979 kernel: Key type dns_resolver registered May 8 00:05:08.781986 kernel: IPI shorthand broadcast: enabled May 8 00:05:08.781992 kernel: sched_clock: Marking stable (884003510, 225805837)->(1166118822, -56309475) May 8 00:05:08.781999 kernel: registered taskstats version 1 May 8 00:05:08.782005 kernel: Loading compiled-in X.509 certificates May 8 00:05:08.782011 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:05:08.782018 kernel: Key type .fscrypt registered May 8 00:05:08.782024 kernel: Key type fscrypt-provisioning registered May 8 00:05:08.782030 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:05:08.782038 kernel: ima: Allocated hash algorithm: sha1 May 8 00:05:08.782044 kernel: ima: No architecture policies found May 8 00:05:08.782050 kernel: clk: Disabling unused clocks May 8 00:05:08.782056 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:05:08.782062 kernel: Write protecting the kernel read-only data: 38912k May 8 00:05:08.782069 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:05:08.782075 kernel: Run /init as init process May 8 00:05:08.782082 kernel: with arguments: May 8 00:05:08.782088 kernel: /init May 8 00:05:08.782095 kernel: with environment: May 8 00:05:08.782101 kernel: HOME=/ May 8 00:05:08.782108 kernel: TERM=linux May 8 00:05:08.782114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:05:08.782121 systemd[1]: Successfully made /usr/ read-only. May 8 00:05:08.782129 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:05:08.782136 systemd[1]: Detected virtualization vmware. May 8 00:05:08.782142 systemd[1]: Detected architecture x86-64. May 8 00:05:08.782149 systemd[1]: Running in initrd. May 8 00:05:08.782156 systemd[1]: No hostname configured, using default hostname. May 8 00:05:08.782163 systemd[1]: Hostname set to . May 8 00:05:08.782169 systemd[1]: Initializing machine ID from random generator. May 8 00:05:08.782175 systemd[1]: Queued start job for default target initrd.target. May 8 00:05:08.782182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:05:08.782188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:05:08.782195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:05:08.782203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:05:08.782210 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:05:08.782217 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:05:08.782224 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:05:08.782231 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:05:08.782237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:05:08.782244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:05:08.782252 systemd[1]: Reached target paths.target - Path Units. May 8 00:05:08.782258 systemd[1]: Reached target slices.target - Slice Units. May 8 00:05:08.782264 systemd[1]: Reached target swap.target - Swaps. May 8 00:05:08.782271 systemd[1]: Reached target timers.target - Timer Units. May 8 00:05:08.782277 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:05:08.782284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:05:08.782292 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
May 8 00:05:08.782298 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:05:08.782306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:05:08.782312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:05:08.782319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:05:08.782325 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:05:08.782332 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:05:08.782338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:05:08.782345 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:05:08.782351 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:05:08.782358 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:05:08.782366 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:05:08.782373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:08.782393 systemd-journald[217]: Collecting audit messages is disabled. May 8 00:05:08.782411 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:05:08.782419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:05:08.782426 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:05:08.782433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:05:08.782440 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:05:08.782448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:05:08.782454 kernel: Bridge firewalling registered May 8 00:05:08.782461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:05:08.782468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:05:08.782474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:08.782482 systemd-journald[217]: Journal started May 8 00:05:08.782496 systemd-journald[217]: Runtime Journal (/run/log/journal/064a858f13c34b6d87b5202a0bf9dba4) is 4.8M, max 38.6M, 33.8M free. May 8 00:05:08.746207 systemd-modules-load[218]: Inserted module 'overlay' May 8 00:05:08.770634 systemd-modules-load[218]: Inserted module 'br_netfilter' May 8 00:05:08.785869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:05:08.786113 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:05:08.791128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:05:08.791672 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:05:08.794106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:05:08.799509 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:08.801044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:05:08.804055 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 8 00:05:08.806759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:08.808002 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:05:08.816978 dracut-cmdline[254]: dracut-dracut-053 May 8 00:05:08.820342 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:05:08.825645 systemd-resolved[247]: Positive Trust Anchors: May 8 00:05:08.825848 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:05:08.826013 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:05:08.828420 systemd-resolved[247]: Defaulting to hostname 'linux'. May 8 00:05:08.829474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:05:08.829635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:05:08.861943 kernel: SCSI subsystem initialized May 8 00:05:08.867941 kernel: Loading iSCSI transport class v2.0-870. May 8 00:05:08.874946 kernel: iscsi: registered transport (tcp) May 8 00:05:08.887939 kernel: iscsi: registered transport (qla4xxx) May 8 00:05:08.887969 kernel: QLogic iSCSI HBA Driver May 8 00:05:08.906954 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:05:08.910021 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:05:08.925092 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:05:08.925136 kernel: device-mapper: uevent: version 1.0.3 May 8 00:05:08.926153 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:05:08.956968 kernel: raid6: avx2x4 gen() 47526 MB/s May 8 00:05:08.973969 kernel: raid6: avx2x2 gen() 52932 MB/s May 8 00:05:08.991192 kernel: raid6: avx2x1 gen() 44765 MB/s May 8 00:05:08.991241 kernel: raid6: using algorithm avx2x2 gen() 52932 MB/s May 8 00:05:09.009192 kernel: raid6: .... xor() 32124 MB/s, rmw enabled May 8 00:05:09.009254 kernel: raid6: using avx2x2 recovery algorithm May 8 00:05:09.022946 kernel: xor: automatically using best checksumming function avx May 8 00:05:09.110944 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:05:09.115978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:05:09.122041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:05:09.130607 systemd-udevd[436]: Using default interface naming scheme 'v255'. 
May 8 00:05:09.133467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:05:09.139036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:05:09.145686 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation May 8 00:05:09.160761 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:05:09.165004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:05:09.235210 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:05:09.241062 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:05:09.247692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:05:09.248739 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:05:09.249375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:05:09.249747 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:05:09.254034 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:05:09.260291 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:05:09.308939 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 8 00:05:09.316938 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 8 00:05:09.321633 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 8 00:05:09.334177 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 8 00:05:09.334272 kernel: vmw_pvscsi: using 64bit dma May 8 00:05:09.334289 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:05:09.334299 kernel: vmw_pvscsi: max_id: 16 May 8 00:05:09.334306 kernel: vmw_pvscsi: setting ring_pages to 8 May 8 00:05:09.336261 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:05:09.336296 kernel: AES CTR mode by8 optimization enabled May 8 00:05:09.342213 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 8 00:05:09.346974 kernel: vmw_pvscsi: enabling reqCallThreshold May 8 00:05:09.347000 kernel: vmw_pvscsi: driver-based request coalescing enabled May 8 00:05:09.347013 kernel: vmw_pvscsi: using MSI-X May 8 00:05:09.347213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:05:09.347454 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:09.349297 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 8 00:05:09.348982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:05:09.349082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:05:09.349153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:09.349753 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:09.354940 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 8 00:05:09.356888 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 8 00:05:09.356292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:09.358940 kernel: libata version 3.00 loaded. 
May 8 00:05:09.363968 kernel: ata_piix 0000:00:07.1: version 2.13 May 8 00:05:09.365107 kernel: scsi host1: ata_piix May 8 00:05:09.365180 kernel: scsi host2: ata_piix May 8 00:05:09.365240 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 8 00:05:09.365250 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 8 00:05:09.374531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:09.380002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:05:09.391340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:09.534946 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 8 00:05:09.540958 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 8 00:05:09.551482 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 8 00:05:09.556414 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:05:09.556487 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 8 00:05:09.556554 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 8 00:05:09.556615 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 8 00:05:09.556675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:05:09.556684 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:05:09.563943 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 8 00:05:09.576861 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:05:09.576872 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:05:09.598938 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (488) May 8 00:05:09.611101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:05:09.616893 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 8 00:05:09.622564 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 8 00:05:09.682942 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (498) May 8 00:05:09.690442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 8 00:05:09.690744 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 8 00:05:09.696008 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:05:10.113951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:05:11.215790 disk-uuid[597]: The operation has completed successfully. May 8 00:05:11.216021 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:05:11.538834 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:05:11.538903 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:05:11.543020 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:05:11.548506 sh[612]: Success May 8 00:05:11.569985 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 8 00:05:11.931753 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:05:11.932565 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:05:11.932754 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 8 00:05:11.971412 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:05:11.971466 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:11.971488 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:05:11.972524 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:05:11.973474 kernel: BTRFS info (device dm-0): using free space tree May 8 00:05:11.982945 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:05:11.984649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:05:11.993105 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 8 00:05:11.994722 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:05:12.012985 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:12.013021 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:12.014582 kernel: BTRFS info (device sda6): using free space tree May 8 00:05:12.021946 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:05:12.027196 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:12.028255 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:05:12.034106 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:05:12.110275 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:05:12.124966 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:05:12.150377 ignition[669]: Ignition 2.20.0 May 8 00:05:12.150383 ignition[669]: Stage: fetch-offline May 8 00:05:12.150404 ignition[669]: no configs at "/usr/lib/ignition/base.d" May 8 00:05:12.150409 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:12.150464 ignition[669]: parsed url from cmdline: "" May 8 00:05:12.150466 ignition[669]: no config URL provided May 8 00:05:12.150469 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:05:12.150474 ignition[669]: no config at "/usr/lib/ignition/user.ign" May 8 00:05:12.150871 ignition[669]: config successfully fetched May 8 00:05:12.150888 ignition[669]: parsing config with SHA512: e84210e03c51e625e0a7301798bc6097224608e162c7e8984dc5932b3a630e7e0dd62d87ebfb0658499166622f8d9cf267aa99dd12be76191229dfed299bf135 May 8 00:05:12.154212 unknown[669]: fetched base config from "system" May 8 00:05:12.154219 unknown[669]: fetched user config from "vmware" May 8 00:05:12.154582 ignition[669]: fetch-offline: fetch-offline passed May 8 00:05:12.154627 ignition[669]: Ignition finished successfully May 8 00:05:12.155681 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:05:12.178326 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:05:12.183007 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:05:12.196238 systemd-networkd[805]: lo: Link UP May 8 00:05:12.196244 systemd-networkd[805]: lo: Gained carrier May 8 00:05:12.197171 systemd-networkd[805]: Enumeration completed May 8 00:05:12.197339 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 8 00:05:12.197426 systemd-networkd[805]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 8 00:05:12.197490 systemd[1]: Reached target network.target - Network. May 8 00:05:12.197583 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:05:12.200826 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:05:12.200943 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:05:12.200070 systemd-networkd[805]: ens192: Link UP May 8 00:05:12.200073 systemd-networkd[805]: ens192: Gained carrier May 8 00:05:12.210056 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:05:12.218030 ignition[808]: Ignition 2.20.0 May 8 00:05:12.218042 ignition[808]: Stage: kargs May 8 00:05:12.218143 ignition[808]: no configs at "/usr/lib/ignition/base.d" May 8 00:05:12.218150 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:12.218659 ignition[808]: kargs: kargs passed May 8 00:05:12.218683 ignition[808]: Ignition finished successfully May 8 00:05:12.219888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:05:12.227062 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:05:12.234026 ignition[815]: Ignition 2.20.0 May 8 00:05:12.234034 ignition[815]: Stage: disks May 8 00:05:12.234167 ignition[815]: no configs at "/usr/lib/ignition/base.d" May 8 00:05:12.234174 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:12.235285 ignition[815]: disks: disks passed May 8 00:05:12.235331 ignition[815]: Ignition finished successfully May 8 00:05:12.236114 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:05:12.236334 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:05:12.236441 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:05:12.236628 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:05:12.236809 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:05:12.236983 systemd[1]: Reached target basic.target - Basic System. May 8 00:05:12.240032 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:05:12.250459 systemd-fsck[823]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 8 00:05:12.252311 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:05:12.258051 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:05:12.315954 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:05:12.315862 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:05:12.316248 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:05:12.327053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:05:12.328592 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:05:12.328942 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:05:12.328978 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
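The initrd network unit referenced above, 10-dracut-cmdline-99.network, appears to be generated by dracut and its contents are not shown in the log. A minimal systemd-networkd unit that brings up ens192 with DHCP in the same way might look like the following sketch (file name and directives are illustrative):

    # /etc/systemd/network/10-dhcp-ens192.network  (illustrative)
    [Match]
    Name=ens192

    [Network]
    # Acquire the interface configuration over DHCP, matching the behaviour
    # observed when ens192 gains carrier during the initrd phase.
    DHCP=yes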
May 8 00:05:12.328996 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:05:12.332837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:05:12.333969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:05:12.336948 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (831) May 8 00:05:12.338953 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:12.338974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:12.340487 kernel: BTRFS info (device sda6): using free space tree May 8 00:05:12.347968 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:05:12.349173 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:05:12.377865 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:05:12.384681 initrd-setup-root[862]: cut: /sysroot/etc/group: No such file or directory May 8 00:05:12.391265 initrd-setup-root[869]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:05:12.394146 initrd-setup-root[876]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:05:12.468153 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:05:12.472058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:05:12.474657 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:05:12.479949 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:12.492028 ignition[943]: INFO : Ignition 2.20.0 May 8 00:05:12.492028 ignition[943]: INFO : Stage: mount May 8 00:05:12.493763 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:05:12.493763 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:12.493763 ignition[943]: INFO : mount: mount passed May 8 00:05:12.493763 ignition[943]: INFO : Ignition finished successfully May 8 00:05:12.493139 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:05:12.501030 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:05:12.507561 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:05:12.970037 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:05:12.975076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:05:12.983943 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (955) May 8 00:05:12.988750 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:05:12.988778 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:05:12.988786 kernel: BTRFS info (device sda6): using free space tree May 8 00:05:12.998770 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:05:12.999948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:05:13.014145 ignition[972]: INFO : Ignition 2.20.0 May 8 00:05:13.014145 ignition[972]: INFO : Stage: files May 8 00:05:13.014735 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:05:13.014735 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:13.014969 ignition[972]: DEBUG : files: compiled without relabeling support, skipping May 8 00:05:13.017348 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:05:13.017348 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:05:13.021310 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:05:13.021489 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:05:13.021613 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:05:13.021555 unknown[972]: wrote ssh authorized keys file for user: core May 8 00:05:13.025012 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:05:13.025282 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 8 00:05:13.112307 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:05:13.463577 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:05:13.463577 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:05:13.463577 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:05:13.930248 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:05:13.989284 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:05:13.989284 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:05:13.989762 ignition[972]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:05:13.989762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:05:13.991487 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:05:13.991487 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:05:13.991487 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:05:13.991487 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:05:14.217127 systemd-networkd[805]: ens192: Gained IPv6LL May 8 00:05:14.299173 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:05:14.570864 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:05:14.571194 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:05:14.571194 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 8 00:05:14.571194 ignition[972]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 8 00:05:14.576158 ignition[972]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 8 00:05:14.576388 ignition[972]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:05:14.827751 ignition[972]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:05:14.830257 ignition[972]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:05:14.830257 ignition[972]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:05:14.830257 ignition[972]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" 
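Ignition logs that it writes a prepare-helm.service unit, but not the unit's contents. Purely as a hypothetical sketch of what such a oneshot unit commonly looks like (every directive below is an assumption and none of it is taken from this machine), it might resemble:

    # /etc/systemd/system/prepare-helm.service  (hypothetical sketch)
    [Unit]
    Description=Unpack helm into /opt/bin
    # Only run if Ignition actually staged the tarball written above.
    ConditionPathExists=/opt/helm-v3.17.0-linux-amd64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Extract just the helm binary from the release tarball into /opt/bin.
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm

    [Install]
    WantedBy=multi-user.target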
May 8 00:05:14.830257 ignition[972]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:05:14.830878 ignition[972]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:05:14.830878 ignition[972]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:05:14.830878 ignition[972]: INFO : files: files passed May 8 00:05:14.830878 ignition[972]: INFO : Ignition finished successfully May 8 00:05:14.832061 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:05:14.837108 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:05:14.838743 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:05:14.839387 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:05:14.839572 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:05:14.846605 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:05:14.846605 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:05:14.847615 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:05:14.848463 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:05:14.848826 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:05:14.852062 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:05:14.866045 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:05:14.866260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:05:14.866745 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:05:14.866975 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:05:14.867239 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:05:14.867863 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:05:14.877330 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:05:14.882033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:05:14.888146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:05:14.888505 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:05:14.888674 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:05:14.888823 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:05:14.888900 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:05:14.889661 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:05:14.889810 systemd[1]: Stopped target basic.target - Basic System. May 8 00:05:14.890217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:05:14.890486 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:05:14.890766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
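The preset decisions recorded by Ignition above (disable coreos-metadata.service, enable prepare-helm.service) are expressed through systemd preset files. A preset file stating exactly those two policies would look like this; the path and file name are illustrative:

    # /etc/systemd/system-preset/20-ignition.preset  (illustrative)
    # "systemctl preset" enables or disables units according to these rules;
    # the first line that matches a unit name wins.
    enable prepare-helm.service
    disable coreos-metadata.service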
May 8 00:05:14.891064 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:05:14.891348 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:05:14.891774 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:05:14.892036 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:05:14.892330 systemd[1]: Stopped target swap.target - Swaps. May 8 00:05:14.892579 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:05:14.892657 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:05:14.893185 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:05:14.893482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:05:14.893775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:05:14.893932 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:05:14.894081 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:05:14.894144 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:05:14.894390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:05:14.894454 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:05:14.894674 systemd[1]: Stopped target paths.target - Path Units. May 8 00:05:14.894806 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:05:14.897953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:05:14.898156 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:05:14.898347 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:05:14.898531 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:05:14.898585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:05:14.898783 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:05:14.898826 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:05:14.899077 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:05:14.899142 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:05:14.899368 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:05:14.899427 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:05:14.904084 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:05:14.904207 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:05:14.904298 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:05:14.906059 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:05:14.906244 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:05:14.906335 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:05:14.906546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:05:14.906647 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:05:14.910706 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:05:14.910760 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 8 00:05:14.913609 ignition[1026]: INFO : Ignition 2.20.0 May 8 00:05:14.913838 ignition[1026]: INFO : Stage: umount May 8 00:05:14.914068 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:05:14.914198 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 8 00:05:14.914943 ignition[1026]: INFO : umount: umount passed May 8 00:05:14.915074 ignition[1026]: INFO : Ignition finished successfully May 8 00:05:14.915667 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:05:14.915955 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:05:14.916350 systemd[1]: Stopped target network.target - Network. May 8 00:05:14.916763 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:05:14.916874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:05:14.917537 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:05:14.917570 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:05:14.917802 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:05:14.917823 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:05:14.918456 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:05:14.918479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:05:14.918679 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:05:14.919286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:05:14.921807 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:05:14.922185 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:05:14.922873 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:05:14.925214 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:05:14.925370 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:05:14.925432 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:05:14.926160 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:05:14.926644 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:05:14.926670 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:05:14.931024 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:05:14.931157 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:05:14.931189 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:05:14.931325 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 8 00:05:14.931347 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 8 00:05:14.931459 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:05:14.931481 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:14.931632 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:05:14.931653 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:05:14.931755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:05:14.931775 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 8 00:05:14.931963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:05:14.933140 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:05:14.933188 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:05:14.944054 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:05:14.944149 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:05:14.944643 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:05:14.944678 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:05:14.944985 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:05:14.945005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:05:14.945369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:05:14.945396 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:05:14.946163 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:05:14.946187 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:05:14.946736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:05:14.946760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:05:14.954076 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:05:14.954191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:05:14.954231 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:05:14.954866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:05:14.954893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:05:14.955634 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:05:14.955668 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:05:14.955863 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:05:14.955913 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:05:14.957565 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:05:14.957618 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:05:15.048674 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:05:15.048743 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:05:15.049157 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:05:15.049285 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:05:15.049318 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:05:15.056034 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:05:15.066746 systemd[1]: Switching root. May 8 00:05:15.100511 systemd-journald[217]: Journal stopped May 8 00:05:18.125566 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
May 8 00:05:18.125587 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:05:18.125595 kernel: SELinux: policy capability open_perms=1 May 8 00:05:18.125601 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:05:18.125606 kernel: SELinux: policy capability always_check_network=0 May 8 00:05:18.125611 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:05:18.125619 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:05:18.125625 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:05:18.125630 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:05:18.125636 kernel: audit: type=1403 audit(1746662716.332:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:05:18.125646 systemd[1]: Successfully loaded SELinux policy in 87.646ms. May 8 00:05:18.125661 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.106ms. May 8 00:05:18.125677 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:05:18.125687 systemd[1]: Detected virtualization vmware. May 8 00:05:18.125693 systemd[1]: Detected architecture x86-64. May 8 00:05:18.125707 systemd[1]: Detected first boot. May 8 00:05:18.125717 systemd[1]: Initializing machine ID from random generator. May 8 00:05:18.125726 zram_generator::config[1071]: No configuration found. May 8 00:05:18.125813 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 8 00:05:18.125823 kernel: Guest personality initialized and is active May 8 00:05:18.125829 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:05:18.125835 kernel: Initialized host personality May 8 00:05:18.125841 kernel: NET: Registered PF_VSOCK protocol family May 8 00:05:18.125847 systemd[1]: Populated /etc with preset unit settings. May 8 00:05:18.125857 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:05:18.125864 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 8 00:05:18.125871 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:05:18.125877 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:05:18.125883 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:05:18.125890 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:05:18.125899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:05:18.125906 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:05:18.125914 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:05:18.126246 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:05:18.126260 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:05:18.126268 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
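The "Ignoring unknown escape sequences" warning above is triggered by the backslash sequences \K and \d (part of a grep -P pattern) inside a quoted ExecStart line, since systemd applies C-style escaping to quoted command-line arguments. One common way to silence the warning is to double the backslashes so that a literal backslash reaches the shell; the fragment below only illustrates that technique and is not the actual content of coreos-metadata.service:

    # Fragment of an ExecStart line (illustrative only).
    # "\\K" and "\\d" reach the shell as "\K" and "\d", so the grep -P pattern
    # is unchanged while systemd no longer sees unknown escape sequences.
    # ${OUTPUT} comes from the unit's environment, which is not shown in the log.
    ExecStart=/usr/bin/sh -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \\K[\\d.]+")" > ${OUTPUT}'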
May 8 00:05:18.126274 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:05:18.126281 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:05:18.126290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:05:18.126297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:05:18.126306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:05:18.126313 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:05:18.126319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:05:18.126326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:05:18.126333 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:05:18.126340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:05:18.126348 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:05:18.126356 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:05:18.126363 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:05:18.126370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:05:18.126376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:05:18.126383 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:05:18.126390 systemd[1]: Reached target slices.target - Slice Units. May 8 00:05:18.126396 systemd[1]: Reached target swap.target - Swaps. May 8 00:05:18.126405 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:05:18.126411 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:05:18.126419 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:05:18.126425 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:05:18.126432 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:05:18.126440 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:05:18.126447 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:05:18.126454 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:05:18.126461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:05:18.126467 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:05:18.126474 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:18.126481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:05:18.126488 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:05:18.126496 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:05:18.126504 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:05:18.126511 systemd[1]: Reached target machines.target - Containers. 
May 8 00:05:18.126518 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:05:18.126525 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 8 00:05:18.126531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:05:18.126538 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:05:18.126545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:05:18.126553 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:05:18.126560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:05:18.126567 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:05:18.126573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:05:18.126580 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:05:18.126587 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:05:18.126594 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:05:18.126601 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:05:18.126608 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:05:18.126618 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:05:18.126628 kernel: fuse: init (API version 7.39) May 8 00:05:18.126634 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:05:18.126641 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:05:18.126648 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:05:18.126656 kernel: loop: module loaded May 8 00:05:18.126662 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:05:18.126669 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:05:18.126678 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:05:18.126685 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:05:18.126691 systemd[1]: Stopped verity-setup.service. May 8 00:05:18.127324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:18.127337 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:05:18.127359 systemd-journald[1164]: Collecting audit messages is disabled. May 8 00:05:18.127381 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:05:18.127390 systemd-journald[1164]: Journal started May 8 00:05:18.127405 systemd-journald[1164]: Runtime Journal (/run/log/journal/7d46aa6801d7474cafb49fa7b3c9f850) is 4.8M, max 38.6M, 33.8M free. May 8 00:05:17.956725 systemd[1]: Queued start job for default target multi-user.target. May 8 00:05:17.966162 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 00:05:17.966403 systemd[1]: systemd-journald.service: Deactivated successfully. 
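Units like modprobe@configfs.service and modprobe@dm_mod.service above are instances of a single template unit; the text after the "@" is handed to the template as the instance name. A simplified sketch of such a template follows (the real modprobe@.service shipped with systemd differs in its details):

    # modprobe@.service  (simplified sketch, not the exact upstream unit)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # %i expands to the instance name, e.g. "dm_mod" for modprobe@dm_mod.service;
    # the leading "-" means a missing module is not treated as a failure.
    ExecStart=-/usr/sbin/modprobe -ab %i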
May 8 00:05:18.130474 jq[1141]: true May 8 00:05:18.133970 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:05:18.134173 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:05:18.134322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:05:18.134469 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:05:18.134616 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:05:18.134861 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:05:18.136108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:05:18.136361 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:05:18.136961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:05:18.137216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:05:18.137317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:05:18.137546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:05:18.138012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:05:18.138266 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:05:18.138359 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:05:18.139098 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:05:18.139196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:05:18.139453 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:05:18.140199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:05:18.140643 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:05:18.146935 jq[1186]: true May 8 00:05:18.152661 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:05:18.152935 kernel: ACPI: bus type drm_connector registered May 8 00:05:18.156128 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:05:18.156255 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:05:18.159285 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:05:18.176977 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:05:18.191011 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:05:18.191153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:05:18.191179 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:05:18.191917 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:05:18.197776 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:05:18.200605 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:05:18.200953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:18.206823 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:05:18.213022 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
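systemd-journald reports its runtime journal limits above (4.8M used of a 38.6M cap), and systemd-journal-flush.service, started at the end of the previous messages, later moves that journal to persistent storage under /var/log/journal. Those limits can be tuned with a journald.conf drop-in; the values below are purely illustrative:

    # /etc/systemd/journald.conf.d/10-size.conf  (illustrative values)
    [Journal]
    # Cap persistent journal usage under /var/log/journal.
    SystemMaxUse=512M
    # Cap the volatile journal under /run/log/journal before the flush.
    RuntimeMaxUse=64M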
May 8 00:05:18.213161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:05:18.215853 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:05:18.216040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:05:18.220166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:05:18.222056 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:05:18.227057 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:05:18.228234 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:05:18.228415 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:05:18.228676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:05:18.238050 systemd-journald[1164]: Time spent on flushing to /var/log/journal/7d46aa6801d7474cafb49fa7b3c9f850 is 35.127ms for 1852 entries. May 8 00:05:18.238050 systemd-journald[1164]: System Journal (/var/log/journal/7d46aa6801d7474cafb49fa7b3c9f850) is 8M, max 584.8M, 576.8M free. May 8 00:05:18.288655 systemd-journald[1164]: Received client request to flush runtime journal. May 8 00:05:18.288681 kernel: loop0: detected capacity change from 0 to 218376 May 8 00:05:18.279725 ignition[1197]: Ignition 2.20.0 May 8 00:05:18.246454 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:05:18.279889 ignition[1197]: deleting config from guestinfo properties May 8 00:05:18.246657 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:05:18.301041 ignition[1197]: Successfully deleted config May 8 00:05:18.256098 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:05:18.277250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:05:18.298440 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:05:18.299155 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:05:18.306143 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 8 00:05:18.311197 udevadm[1233]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:05:18.326983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:18.329234 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:05:18.348946 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:05:18.366028 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:05:18.369875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:05:18.374937 kernel: loop1: detected capacity change from 0 to 2960 May 8 00:05:18.389079 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 00:05:18.389614 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 00:05:18.394136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
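systemd-sysext.service, started above, merges system extension images into /usr (the "(sd-merge)" messages further down list containerd-flatcar, docker-flatcar, kubernetes and oem-vmware). For an image such as the kubernetes-v1.32.0-x86-64.raw staged by Ignition to be accepted, it must carry an extension-release file whose ID matches the host OS (or is "_any"). A minimal sketch of that metadata, with assumed values:

    # Inside the kubernetes sysext image:
    # usr/lib/extension-release.d/extension-release.kubernetes  (assumed values)
    # ID must match the host's os-release ID, or be "_any", for the image to
    # be merged; SYSEXT_LEVEL gates compatibility between host and extension.
    ID=flatcar
    SYSEXT_LEVEL=1.0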
May 8 00:05:18.422941 kernel: loop2: detected capacity change from 0 to 147912 May 8 00:05:18.473942 kernel: loop3: detected capacity change from 0 to 138176 May 8 00:05:18.696622 kernel: loop4: detected capacity change from 0 to 218376 May 8 00:05:18.809066 kernel: loop5: detected capacity change from 0 to 2960 May 8 00:05:18.828937 kernel: loop6: detected capacity change from 0 to 147912 May 8 00:05:18.868178 kernel: loop7: detected capacity change from 0 to 138176 May 8 00:05:18.896472 (sd-merge)[1249]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 8 00:05:18.896764 (sd-merge)[1249]: Merged extensions into '/usr'. May 8 00:05:18.905208 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:05:18.905311 systemd[1]: Reloading... May 8 00:05:18.954940 zram_generator::config[1273]: No configuration found. May 8 00:05:19.022853 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:05:19.041043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:05:19.084523 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:05:19.084720 systemd[1]: Reloading finished in 178 ms. May 8 00:05:19.099593 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:05:19.105479 systemd[1]: Starting ensure-sysext.service... May 8 00:05:19.108519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:05:19.122380 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:05:19.124599 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:05:19.132830 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:05:19.133009 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:05:19.133492 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:05:19.133650 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. May 8 00:05:19.133686 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. May 8 00:05:19.136483 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... May 8 00:05:19.136497 systemd[1]: Reloading... May 8 00:05:19.155211 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:05:19.155218 systemd-tmpfiles[1333]: Skipping /boot May 8 00:05:19.157947 systemd-udevd[1335]: Using default interface naming scheme 'v255'. May 8 00:05:19.164821 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:05:19.164828 systemd-tmpfiles[1333]: Skipping /boot May 8 00:05:19.183940 zram_generator::config[1360]: No configuration found. May 8 00:05:19.249691 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") May 8 00:05:19.270859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:05:19.315876 systemd[1]: Reloading finished in 179 ms. May 8 00:05:19.333753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:05:19.338815 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:05:19.347137 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:05:19.349670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:05:19.352639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:05:19.362190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:05:19.365722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:19.366542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:05:19.368081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:05:19.371917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:05:19.372094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:19.372161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:05:19.372220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:19.372725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:05:19.372875 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:05:19.380519 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:05:19.381093 systemd[1]: Finished ensure-sysext.service. May 8 00:05:19.381404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:05:19.381793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:05:19.382489 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:05:19.382592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:05:19.385612 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:19.387180 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:05:19.389511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:05:19.389685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:05:19.389707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 8 00:05:19.389743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:05:19.394233 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:05:19.397002 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:05:19.397141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:05:19.397408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:05:19.398003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:05:19.398273 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:05:19.398374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:05:19.399495 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:05:19.428782 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:05:19.434129 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:05:19.434298 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:05:19.486469 augenrules[1483]: No rules May 8 00:05:19.486488 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:05:19.487453 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:05:19.487606 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:05:19.516235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:05:19.545986 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:05:19.554599 kernel: ACPI: button: Power Button [PWRF] May 8 00:05:19.578412 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:05:19.578658 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:05:19.589415 systemd-resolved[1425]: Positive Trust Anchors: May 8 00:05:19.589702 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:05:19.589796 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:05:19.597938 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1469) May 8 00:05:19.604018 systemd-networkd[1461]: lo: Link UP May 8 00:05:19.605043 systemd-networkd[1461]: lo: Gained carrier May 8 00:05:19.605843 systemd-resolved[1425]: Defaulting to hostname 'linux'. May 8 00:05:19.606512 systemd-networkd[1461]: Enumeration completed May 8 00:05:19.607921 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:05:19.608535 systemd-networkd[1461]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
May 8 00:05:19.612119 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 8 00:05:19.612288 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 8 00:05:19.614014 systemd-networkd[1461]: ens192: Link UP May 8 00:05:19.614096 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:05:19.614792 systemd-networkd[1461]: ens192: Gained carrier May 8 00:05:19.616500 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:05:19.616976 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:05:19.617138 systemd[1]: Reached target network.target - Network. May 8 00:05:19.617381 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:05:19.618562 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. May 8 00:05:19.645752 ldconfig[1213]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:05:19.646459 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:05:19.651977 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:05:19.660935 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 8 00:05:19.661124 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:05:19.661493 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:05:19.662382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:05:19.676164 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:05:19.692182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 8 00:05:19.697161 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:05:19.702940 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:05:19.706529 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:05:19.725154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:05:19.731658 (udev-worker)[1472]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 8 00:05:19.737939 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:05:19.775852 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:05:19.783072 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:05:19.791915 lvm[1519]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:05:19.815093 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:05:19.816157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:05:19.820047 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:05:19.820403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 8 00:05:19.820912 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:05:19.821383 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:05:19.821523 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:05:19.821738 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:05:19.821888 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:05:19.822017 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:05:19.822132 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:05:19.822148 systemd[1]: Reached target paths.target - Path Units. May 8 00:05:19.822244 systemd[1]: Reached target timers.target - Timer Units. May 8 00:05:19.822998 lvm[1523]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:05:19.823270 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:05:19.824371 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:05:19.826558 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:05:19.826768 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:05:19.826890 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:05:19.829922 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:05:19.830344 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:05:19.830948 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:05:19.831103 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:05:19.831201 systemd[1]: Reached target basic.target - Basic System. May 8 00:05:19.831326 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:05:19.831340 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:05:19.832222 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:05:19.834083 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:05:19.836632 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:05:19.837619 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:05:19.839805 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:05:19.840718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:05:19.842452 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:05:19.844125 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:05:19.846072 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:05:19.849848 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:05:19.850467 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 8 00:05:19.851059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:05:19.851431 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:05:19.853122 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:05:19.857060 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 8 00:05:19.857695 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:05:19.873013 jq[1529]: false May 8 00:05:19.871716 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:05:19.872102 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:05:19.876209 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 8 00:05:19.877749 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 8 00:05:19.878037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:05:19.878962 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:05:19.881753 jq[1538]: true May 8 00:05:19.888362 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:05:19.891321 dbus-daemon[1528]: [system] SELinux support is enabled May 8 00:05:19.893126 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:05:19.895716 extend-filesystems[1530]: Found loop4 May 8 00:05:19.895716 extend-filesystems[1530]: Found loop5 May 8 00:05:19.895716 extend-filesystems[1530]: Found loop6 May 8 00:05:19.895716 extend-filesystems[1530]: Found loop7 May 8 00:05:19.895716 extend-filesystems[1530]: Found sda May 8 00:05:19.895716 extend-filesystems[1530]: Found sda1 May 8 00:05:19.895716 extend-filesystems[1530]: Found sda2 May 8 00:05:19.895716 extend-filesystems[1530]: Found sda3 May 8 00:05:19.895716 extend-filesystems[1530]: Found usr May 8 00:05:19.895716 extend-filesystems[1530]: Found sda4 May 8 00:05:19.895716 extend-filesystems[1530]: Found sda6 May 8 00:05:19.895716 extend-filesystems[1530]: Found sda7 May 8 00:05:19.895716 extend-filesystems[1530]: Found sda9 May 8 00:05:19.895716 extend-filesystems[1530]: Checking size of /dev/sda9 May 8 00:05:19.894846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:05:19.894864 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:05:19.895031 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:05:19.895041 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:05:19.905009 jq[1552]: true May 8 00:05:19.913100 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:05:19.913396 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 8 00:05:19.918015 update_engine[1537]: I20250508 00:05:19.916802 1537 main.cc:92] Flatcar Update Engine starting May 8 00:05:19.918886 tar[1541]: linux-amd64/LICENSE May 8 00:05:19.918886 tar[1541]: linux-amd64/helm May 8 00:05:19.925192 update_engine[1537]: I20250508 00:05:19.922876 1537 update_check_scheduler.cc:74] Next update check in 10m57s May 8 00:05:19.923126 systemd[1]: Started update-engine.service - Update Engine. May 8 00:05:19.929039 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:05:19.950086 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 8 00:05:19.952559 unknown[1548]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 8 00:05:19.956951 extend-filesystems[1530]: Old size kept for /dev/sda9 May 8 00:05:19.956951 extend-filesystems[1530]: Found sr0 May 8 00:05:19.955081 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:05:19.955233 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:05:19.962820 unknown[1548]: Core dump limit set to -1 May 8 00:05:19.989707 systemd-logind[1535]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:05:19.989721 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:05:19.990499 systemd-logind[1535]: New seat seat0. May 8 00:05:19.991153 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1468) May 8 00:05:19.991008 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:05:20.019838 bash[1590]: Updated "/home/core/.ssh/authorized_keys" May 8 00:05:20.020654 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:05:20.021244 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:05:20.067171 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:05:20.266346 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:05:20.276330 containerd[1554]: time="2025-05-08T00:05:20.276284115Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:05:20.289972 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:05:20.299338 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:05:20.305157 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:05:20.305309 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:05:20.308378 containerd[1554]: time="2025-05-08T00:05:20.308349006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.311567 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311307903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311332760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311344802Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311442413Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311452420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311494756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311503646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311616072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311624234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311632116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:20.311786 containerd[1554]: time="2025-05-08T00:05:20.311637408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.312027 containerd[1554]: time="2025-05-08T00:05:20.311677801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.312027 containerd[1554]: time="2025-05-08T00:05:20.311789900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:05:20.312027 containerd[1554]: time="2025-05-08T00:05:20.311854993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:05:20.312027 containerd[1554]: time="2025-05-08T00:05:20.311862759Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:05:20.312027 containerd[1554]: time="2025-05-08T00:05:20.311904919Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:05:20.312027 containerd[1554]: time="2025-05-08T00:05:20.311944311Z" level=info msg="metadata content store policy set" policy=shared May 8 00:05:20.319475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:05:20.325245 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:05:20.326651 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:05:20.327148 systemd[1]: Reached target getty.target - Login Prompts. 
May 8 00:05:20.328132 containerd[1554]: time="2025-05-08T00:05:20.327991900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:05:20.328132 containerd[1554]: time="2025-05-08T00:05:20.328033981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:05:20.328132 containerd[1554]: time="2025-05-08T00:05:20.328058593Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:05:20.328132 containerd[1554]: time="2025-05-08T00:05:20.328076492Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:05:20.328132 containerd[1554]: time="2025-05-08T00:05:20.328085995Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:05:20.328391 containerd[1554]: time="2025-05-08T00:05:20.328318697Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:05:20.328504 containerd[1554]: time="2025-05-08T00:05:20.328496029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:05:20.328649 containerd[1554]: time="2025-05-08T00:05:20.328603424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:05:20.328649 containerd[1554]: time="2025-05-08T00:05:20.328615246Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:05:20.328649 containerd[1554]: time="2025-05-08T00:05:20.328626833Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:05:20.328649 containerd[1554]: time="2025-05-08T00:05:20.328634828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328720588Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328733698Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328743191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328751858Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328759450Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328767571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:05:20.328792 containerd[1554]: time="2025-05-08T00:05:20.328774332Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328785926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328910050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328917478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328943887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328954357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328966478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328974064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328981286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328989535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329014 containerd[1554]: time="2025-05-08T00:05:20.328999630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329006176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329170351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329180014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329188260Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329201300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329208693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329232 containerd[1554]: time="2025-05-08T00:05:20.329214811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329352132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329366218Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329372267Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329378614Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329384167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329391125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329396647Z" level=info msg="NRI interface is disabled by configuration." May 8 00:05:20.329485 containerd[1554]: time="2025-05-08T00:05:20.329411627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:05:20.329845 containerd[1554]: time="2025-05-08T00:05:20.329729171Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:05:20.329845 containerd[1554]: 
time="2025-05-08T00:05:20.329770383Z" level=info msg="Connect containerd service" May 8 00:05:20.329845 containerd[1554]: time="2025-05-08T00:05:20.329798034Z" level=info msg="using legacy CRI server" May 8 00:05:20.329845 containerd[1554]: time="2025-05-08T00:05:20.329802920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:05:20.330159 containerd[1554]: time="2025-05-08T00:05:20.330003000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:05:20.330503 containerd[1554]: time="2025-05-08T00:05:20.330462821Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:05:20.330692 containerd[1554]: time="2025-05-08T00:05:20.330631839Z" level=info msg="Start subscribing containerd event" May 8 00:05:20.330692 containerd[1554]: time="2025-05-08T00:05:20.330667483Z" level=info msg="Start recovering state" May 8 00:05:20.330734 containerd[1554]: time="2025-05-08T00:05:20.330706439Z" level=info msg="Start event monitor" May 8 00:05:20.330734 containerd[1554]: time="2025-05-08T00:05:20.330718534Z" level=info msg="Start snapshots syncer" May 8 00:05:20.330734 containerd[1554]: time="2025-05-08T00:05:20.330723574Z" level=info msg="Start cni network conf syncer for default" May 8 00:05:20.330734 containerd[1554]: time="2025-05-08T00:05:20.330728961Z" level=info msg="Start streaming server" May 8 00:05:20.330917 containerd[1554]: time="2025-05-08T00:05:20.330838760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:05:20.331037 containerd[1554]: time="2025-05-08T00:05:20.330999184Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:05:20.331389 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:05:20.332245 containerd[1554]: time="2025-05-08T00:05:20.332012622Z" level=info msg="containerd successfully booted in 0.057113s" May 8 00:05:20.432819 tar[1541]: linux-amd64/README.md May 8 00:05:20.440219 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:05:20.681035 systemd-networkd[1461]: ens192: Gained IPv6LL May 8 00:05:20.681465 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. May 8 00:05:20.682812 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:05:20.683329 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:05:20.689098 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 8 00:05:20.694091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:05:20.696316 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:05:20.738677 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:05:20.738823 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 8 00:05:20.739483 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:05:20.743457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:05:21.389516 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. 
May 8 00:05:22.645096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:05:22.645706 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:05:22.648286 systemd[1]: Startup finished in 966ms (kernel) + 7.649s (initrd) + 6.401s (userspace) = 15.017s. May 8 00:05:22.650969 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:05:22.685688 login[1672]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying May 8 00:05:22.687757 login[1671]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 00:05:22.693337 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:05:22.697286 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:05:22.701253 systemd-logind[1535]: New session 1 of user core. May 8 00:05:22.709165 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:05:22.714133 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:05:22.717257 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:05:22.718739 systemd-logind[1535]: New session c1 of user core. May 8 00:05:22.836383 systemd[1715]: Queued start job for default target default.target. May 8 00:05:22.840754 systemd[1715]: Created slice app.slice - User Application Slice. May 8 00:05:22.841119 systemd[1715]: Reached target paths.target - Paths. May 8 00:05:22.841202 systemd[1715]: Reached target timers.target - Timers. May 8 00:05:22.842064 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:05:22.849775 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:05:22.849812 systemd[1715]: Reached target sockets.target - Sockets. May 8 00:05:22.849843 systemd[1715]: Reached target basic.target - Basic System. May 8 00:05:22.849866 systemd[1715]: Reached target default.target - Main User Target. May 8 00:05:22.849882 systemd[1715]: Startup finished in 127ms. May 8 00:05:22.850143 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:05:22.852088 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:05:23.668722 kubelet[1707]: E0508 00:05:23.668657 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:05:23.670625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:05:23.670724 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:05:23.671238 systemd[1]: kubelet.service: Consumed 663ms CPU time, 252.9M memory peak. May 8 00:05:23.687722 login[1672]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 00:05:23.691041 systemd-logind[1535]: New session 2 of user core. May 8 00:05:23.708083 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:05:24.035770 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:05:24.036610 systemd[1]: Started sshd@0-139.178.70.106:22-124.11.64.11:41627.service - OpenSSH per-connection server daemon (124.11.64.11:41627). 
May 8 00:05:27.574356 sshd-session[1752]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.11.64.11 user=bin May 8 00:05:29.606685 sshd[1750]: PAM: Permission denied for bin from 124.11.64.11 May 8 00:05:30.199085 sshd[1750]: Connection closed by authenticating user bin 124.11.64.11 port 41627 [preauth] May 8 00:05:30.199986 systemd[1]: sshd@0-139.178.70.106:22-124.11.64.11:41627.service: Deactivated successfully. May 8 00:05:33.786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:05:33.796087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:05:34.145175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:05:34.147862 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:05:34.234371 kubelet[1763]: E0508 00:05:34.234327 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:05:34.237579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:05:34.237765 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:05:34.238167 systemd[1]: kubelet.service: Consumed 99ms CPU time, 105.9M memory peak. May 8 00:05:44.287099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:05:44.293146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:05:44.626074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:05:44.637129 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:05:44.680787 kubelet[1779]: E0508 00:05:44.680752 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:05:44.682370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:05:44.682502 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:05:44.682800 systemd[1]: kubelet.service: Consumed 94ms CPU time, 102.3M memory peak. May 8 00:05:50.097410 systemd[1]: Started sshd@1-139.178.70.106:22-139.178.89.65:46920.service - OpenSSH per-connection server daemon (139.178.89.65:46920). May 8 00:05:50.126804 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 46920 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.127691 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.130340 systemd-logind[1535]: New session 3 of user core. May 8 00:05:50.139059 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:05:50.196096 systemd[1]: Started sshd@2-139.178.70.106:22-139.178.89.65:46932.service - OpenSSH per-connection server daemon (139.178.89.65:46932). 
May 8 00:05:50.224162 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 46932 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.224944 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.227971 systemd-logind[1535]: New session 4 of user core. May 8 00:05:50.237089 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:05:50.288142 sshd[1794]: Connection closed by 139.178.89.65 port 46932 May 8 00:05:50.288089 sshd-session[1792]: pam_unix(sshd:session): session closed for user core May 8 00:05:50.297588 systemd[1]: sshd@2-139.178.70.106:22-139.178.89.65:46932.service: Deactivated successfully. May 8 00:05:50.298796 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:05:50.299394 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. May 8 00:05:50.303128 systemd[1]: Started sshd@3-139.178.70.106:22-139.178.89.65:46942.service - OpenSSH per-connection server daemon (139.178.89.65:46942). May 8 00:05:50.304204 systemd-logind[1535]: Removed session 4. May 8 00:05:50.332531 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 46942 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.333324 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.335975 systemd-logind[1535]: New session 5 of user core. May 8 00:05:50.342044 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:05:50.388319 sshd[1802]: Connection closed by 139.178.89.65 port 46942 May 8 00:05:50.388221 sshd-session[1799]: pam_unix(sshd:session): session closed for user core May 8 00:05:50.401902 systemd[1]: Started sshd@4-139.178.70.106:22-139.178.89.65:46956.service - OpenSSH per-connection server daemon (139.178.89.65:46956). May 8 00:05:50.402269 systemd[1]: sshd@3-139.178.70.106:22-139.178.89.65:46942.service: Deactivated successfully. May 8 00:05:50.403115 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:05:50.404607 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. May 8 00:05:50.405735 systemd-logind[1535]: Removed session 5. May 8 00:05:50.431142 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 46956 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.431945 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.435054 systemd-logind[1535]: New session 6 of user core. May 8 00:05:50.441060 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:05:50.490527 sshd[1810]: Connection closed by 139.178.89.65 port 46956 May 8 00:05:50.490160 sshd-session[1805]: pam_unix(sshd:session): session closed for user core May 8 00:05:50.500102 systemd[1]: sshd@4-139.178.70.106:22-139.178.89.65:46956.service: Deactivated successfully. May 8 00:05:50.501078 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:05:50.502079 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. May 8 00:05:50.505088 systemd[1]: Started sshd@5-139.178.70.106:22-139.178.89.65:46972.service - OpenSSH per-connection server daemon (139.178.89.65:46972). May 8 00:05:50.506055 systemd-logind[1535]: Removed session 6. 
May 8 00:05:50.531815 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 46972 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.532570 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.536011 systemd-logind[1535]: New session 7 of user core. May 8 00:05:50.542010 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:05:50.599099 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:05:50.599255 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:05:50.611276 sudo[1819]: pam_unix(sudo:session): session closed for user root May 8 00:05:50.612087 sshd[1818]: Connection closed by 139.178.89.65 port 46972 May 8 00:05:50.612882 sshd-session[1815]: pam_unix(sshd:session): session closed for user core May 8 00:05:50.621508 systemd[1]: sshd@5-139.178.70.106:22-139.178.89.65:46972.service: Deactivated successfully. May 8 00:05:50.622598 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:05:50.623511 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. May 8 00:05:50.627125 systemd[1]: Started sshd@6-139.178.70.106:22-139.178.89.65:46974.service - OpenSSH per-connection server daemon (139.178.89.65:46974). May 8 00:05:50.630050 systemd-logind[1535]: Removed session 7. May 8 00:05:50.653752 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 46974 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.654749 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.658314 systemd-logind[1535]: New session 8 of user core. May 8 00:05:50.664012 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:05:50.713092 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:05:50.713477 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:05:50.715496 sudo[1829]: pam_unix(sudo:session): session closed for user root May 8 00:05:50.718474 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:05:50.718624 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:05:50.733195 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:05:50.748132 augenrules[1851]: No rules May 8 00:05:50.748499 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:05:50.748791 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:05:50.750566 sudo[1828]: pam_unix(sudo:session): session closed for user root May 8 00:05:50.751581 sshd[1827]: Connection closed by 139.178.89.65 port 46974 May 8 00:05:50.751878 sshd-session[1824]: pam_unix(sshd:session): session closed for user core May 8 00:05:50.753859 systemd[1]: sshd@6-139.178.70.106:22-139.178.89.65:46974.service: Deactivated successfully. May 8 00:05:50.755251 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:05:50.762460 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. May 8 00:05:50.763898 systemd[1]: Started sshd@7-139.178.70.106:22-139.178.89.65:46976.service - OpenSSH per-connection server daemon (139.178.89.65:46976). May 8 00:05:50.764969 systemd-logind[1535]: Removed session 8. 
May 8 00:05:50.791722 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 46976 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:05:50.792473 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:50.796329 systemd-logind[1535]: New session 9 of user core. May 8 00:05:50.802057 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:05:50.849818 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:05:50.850002 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:05:51.145223 (dockerd)[1881]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:05:51.145550 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:05:51.410463 dockerd[1881]: time="2025-05-08T00:05:51.410318782Z" level=info msg="Starting up" May 8 00:05:51.464963 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4152540545-merged.mount: Deactivated successfully. May 8 00:05:51.480270 dockerd[1881]: time="2025-05-08T00:05:51.480146779Z" level=info msg="Loading containers: start." May 8 00:05:51.582960 kernel: Initializing XFRM netlink socket May 8 00:07:12.323682 systemd-resolved[1425]: Clock change detected. Flushing caches. May 8 00:07:12.323790 systemd-timesyncd[1442]: Contacted time server 168.235.89.132:123 (2.flatcar.pool.ntp.org). May 8 00:07:12.323822 systemd-timesyncd[1442]: Initial clock synchronization to Thu 2025-05-08 00:07:12.323651 UTC. May 8 00:07:12.332094 systemd-networkd[1461]: docker0: Link UP May 8 00:07:12.358691 dockerd[1881]: time="2025-05-08T00:07:12.358622863Z" level=info msg="Loading containers: done." May 8 00:07:12.368575 dockerd[1881]: time="2025-05-08T00:07:12.368081680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:07:12.368575 dockerd[1881]: time="2025-05-08T00:07:12.368154976Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:07:12.368575 dockerd[1881]: time="2025-05-08T00:07:12.368211742Z" level=info msg="Daemon has completed initialization" May 8 00:07:12.384070 dockerd[1881]: time="2025-05-08T00:07:12.384025146Z" level=info msg="API listen on /run/docker.sock" May 8 00:07:12.384258 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:07:13.158579 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1492575922-merged.mount: Deactivated successfully. May 8 00:07:13.474442 containerd[1554]: time="2025-05-08T00:07:13.474330325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 00:07:14.016169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003518914.mount: Deactivated successfully. 
May 8 00:07:14.915591 containerd[1554]: time="2025-05-08T00:07:14.914942208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:14.915591 containerd[1554]: time="2025-05-08T00:07:14.915375148Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 8 00:07:14.915591 containerd[1554]: time="2025-05-08T00:07:14.915564672Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:14.917305 containerd[1554]: time="2025-05-08T00:07:14.917289048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:14.917971 containerd[1554]: time="2025-05-08T00:07:14.917953500Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.443600191s" May 8 00:07:14.918014 containerd[1554]: time="2025-05-08T00:07:14.917973800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 8 00:07:14.918649 containerd[1554]: time="2025-05-08T00:07:14.918634829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:07:15.483076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 00:07:15.500001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:15.578292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:15.581233 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:07:15.648145 kubelet[2128]: E0508 00:07:15.648112 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:07:15.649866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:07:15.649956 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:07:15.650257 systemd[1]: kubelet.service: Consumed 90ms CPU time, 104.2M memory peak. 
May 8 00:07:16.983492 containerd[1554]: time="2025-05-08T00:07:16.983462434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:16.987812 containerd[1554]: time="2025-05-08T00:07:16.987788453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 8 00:07:16.999686 containerd[1554]: time="2025-05-08T00:07:16.998863262Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:17.006645 containerd[1554]: time="2025-05-08T00:07:17.006622240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:17.007132 containerd[1554]: time="2025-05-08T00:07:17.007116962Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.088466134s" May 8 00:07:17.007181 containerd[1554]: time="2025-05-08T00:07:17.007173313Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 8 00:07:17.007635 containerd[1554]: time="2025-05-08T00:07:17.007617130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:07:18.703225 containerd[1554]: time="2025-05-08T00:07:18.702525409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:18.705219 containerd[1554]: time="2025-05-08T00:07:18.705194104Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 8 00:07:18.706195 containerd[1554]: time="2025-05-08T00:07:18.706176771Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:18.708402 containerd[1554]: time="2025-05-08T00:07:18.708376489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:18.709202 containerd[1554]: time="2025-05-08T00:07:18.709179526Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.701504746s" May 8 00:07:18.709246 containerd[1554]: time="2025-05-08T00:07:18.709202186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 8 00:07:18.709829 containerd[1554]: 
time="2025-05-08T00:07:18.709806124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:07:20.251635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505558385.mount: Deactivated successfully. May 8 00:07:20.676972 containerd[1554]: time="2025-05-08T00:07:20.676862423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:20.682350 containerd[1554]: time="2025-05-08T00:07:20.682287310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 8 00:07:20.688538 containerd[1554]: time="2025-05-08T00:07:20.688495733Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:20.695253 containerd[1554]: time="2025-05-08T00:07:20.695213712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:20.695961 containerd[1554]: time="2025-05-08T00:07:20.695660113Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.985775195s" May 8 00:07:20.695961 containerd[1554]: time="2025-05-08T00:07:20.695685029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:07:20.696060 containerd[1554]: time="2025-05-08T00:07:20.696035164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:07:21.669370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181521706.mount: Deactivated successfully. 
May 8 00:07:22.377432 containerd[1554]: time="2025-05-08T00:07:22.377400851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:22.391950 containerd[1554]: time="2025-05-08T00:07:22.391905745Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 8 00:07:22.402535 containerd[1554]: time="2025-05-08T00:07:22.402482997Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:22.414033 containerd[1554]: time="2025-05-08T00:07:22.413991455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:22.414853 containerd[1554]: time="2025-05-08T00:07:22.414613714Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.718562673s" May 8 00:07:22.414853 containerd[1554]: time="2025-05-08T00:07:22.414637156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 8 00:07:22.415110 containerd[1554]: time="2025-05-08T00:07:22.415028739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:07:23.813168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268121850.mount: Deactivated successfully. 
May 8 00:07:23.839577 containerd[1554]: time="2025-05-08T00:07:23.839263417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:23.840504 containerd[1554]: time="2025-05-08T00:07:23.840480761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:07:23.843642 containerd[1554]: time="2025-05-08T00:07:23.843598591Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:23.845016 containerd[1554]: time="2025-05-08T00:07:23.845000028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:23.845730 containerd[1554]: time="2025-05-08T00:07:23.845315425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.430271802s" May 8 00:07:23.845730 containerd[1554]: time="2025-05-08T00:07:23.845334337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:07:23.845810 containerd[1554]: time="2025-05-08T00:07:23.845756284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:07:24.550501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910744502.mount: Deactivated successfully. May 8 00:07:25.732471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 8 00:07:25.737699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:26.296362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:26.299471 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:07:26.340621 update_engine[1537]: I20250508 00:07:26.340570 1537 update_attempter.cc:509] Updating boot flags... May 8 00:07:26.403139 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2275) May 8 00:07:26.461186 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2275) May 8 00:07:26.476191 kubelet[2265]: E0508 00:07:26.475924 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:07:26.480060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:07:26.480273 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:07:26.480501 systemd[1]: kubelet.service: Consumed 99ms CPU time, 103.7M memory peak. 
May 8 00:07:28.804738 containerd[1554]: time="2025-05-08T00:07:28.804478642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:28.805130 containerd[1554]: time="2025-05-08T00:07:28.805102764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 8 00:07:28.805185 containerd[1554]: time="2025-05-08T00:07:28.805170959Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:28.806986 containerd[1554]: time="2025-05-08T00:07:28.806973927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:28.808797 containerd[1554]: time="2025-05-08T00:07:28.808771364Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.962998223s" May 8 00:07:28.810464 containerd[1554]: time="2025-05-08T00:07:28.808845991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 8 00:07:30.694293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:30.694829 systemd[1]: kubelet.service: Consumed 99ms CPU time, 103.7M memory peak. May 8 00:07:30.703950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:30.720918 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-9.scope)... May 8 00:07:30.721018 systemd[1]: Reloading... May 8 00:07:30.799567 zram_generator::config[2370]: No configuration found. May 8 00:07:30.852582 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:07:30.871148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:07:30.941647 systemd[1]: Reloading finished in 220 ms. May 8 00:07:30.959450 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:07:30.959504 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:07:30.959694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:30.964807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:31.240006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:31.242668 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:07:31.279120 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
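The containerd entries above report, for each control-plane image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd), the bytes read, the resolved digest and the pull duration; etcd is the largest and slowest pull here, roughly 57 MB in just under 5 s. A small sketch, again assuming a hypothetical plain-text export node.log with one journal entry per line, that extracts the image references and pull durations from the escaped msg="Pulled image ..." payloads:

```python
import re

LOG_PATH = "node.log"  # hypothetical export; one journal entry per line assumed

# Matches: Pulled image \"<ref>\" with image id ... in <duration>
pull_re = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+(?:ms|s))')

with open(LOG_PATH, encoding="utf-8") as fh:
    for line in fh:
        m = pull_re.search(line)
        if m:
            image, duration = m.groups()
            print(f"{duration:>14}  {image}")
```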
May 8 00:07:31.279120 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:07:31.279120 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:07:31.288114 kubelet[2435]: I0508 00:07:31.287965 2435 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:07:31.558992 kubelet[2435]: I0508 00:07:31.558764 2435 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:07:31.558992 kubelet[2435]: I0508 00:07:31.558787 2435 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:07:31.558992 kubelet[2435]: I0508 00:07:31.558943 2435 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:07:31.812199 kubelet[2435]: I0508 00:07:31.811786 2435 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:07:31.821404 kubelet[2435]: E0508 00:07:31.821384 2435 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:31.951774 kubelet[2435]: E0508 00:07:31.951737 2435 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:07:31.951774 kubelet[2435]: I0508 00:07:31.951770 2435 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:07:31.973229 kubelet[2435]: I0508 00:07:31.973199 2435 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:07:31.987025 kubelet[2435]: I0508 00:07:31.986980 2435 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:07:31.987807 kubelet[2435]: I0508 00:07:31.987021 2435 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:07:31.987807 kubelet[2435]: I0508 00:07:31.987190 2435 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:07:31.987807 kubelet[2435]: I0508 00:07:31.987198 2435 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:07:31.987807 kubelet[2435]: I0508 00:07:31.987290 2435 state_mem.go:36] "Initialized new in-memory state store" May 8 00:07:31.991228 kubelet[2435]: I0508 00:07:31.991218 2435 kubelet.go:446] "Attempting to sync node with API server" May 8 00:07:31.991446 kubelet[2435]: I0508 00:07:31.991252 2435 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:07:31.991446 kubelet[2435]: I0508 00:07:31.991266 2435 kubelet.go:352] "Adding apiserver pod source" May 8 00:07:31.991446 kubelet[2435]: I0508 00:07:31.991273 2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:07:31.994157 kubelet[2435]: W0508 00:07:31.994127 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:31.994223 kubelet[2435]: E0508 00:07:31.994208 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:31.994302 kubelet[2435]: W0508 00:07:31.994288 2435 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:31.994352 kubelet[2435]: E0508 00:07:31.994345 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:31.997857 kubelet[2435]: I0508 00:07:31.997830 2435 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:07:32.000105 kubelet[2435]: I0508 00:07:32.000016 2435 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:07:32.001558 kubelet[2435]: W0508 00:07:32.001355 2435 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:07:32.001722 kubelet[2435]: I0508 00:07:32.001712 2435 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:07:32.001744 kubelet[2435]: I0508 00:07:32.001731 2435 server.go:1287] "Started kubelet" May 8 00:07:32.002031 kubelet[2435]: I0508 00:07:32.002010 2435 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:07:32.005067 kubelet[2435]: I0508 00:07:32.005052 2435 server.go:490] "Adding debug handlers to kubelet server" May 8 00:07:32.007204 kubelet[2435]: I0508 00:07:32.006996 2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:07:32.007204 kubelet[2435]: I0508 00:07:32.007143 2435 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:07:32.012938 kubelet[2435]: I0508 00:07:32.012922 2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:07:32.027272 kubelet[2435]: I0508 00:07:32.027248 2435 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:07:32.030681 kubelet[2435]: I0508 00:07:32.030666 2435 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:07:32.030821 kubelet[2435]: E0508 00:07:32.030806 2435 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:07:32.032459 kubelet[2435]: I0508 00:07:32.032444 2435 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:07:32.032497 kubelet[2435]: I0508 00:07:32.032474 2435 reconciler.go:26] "Reconciler: start to sync state" May 8 00:07:32.043623 kubelet[2435]: E0508 00:07:32.043253 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" May 8 00:07:32.043623 kubelet[2435]: E0508 00:07:32.007966 2435 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d649c13eee953 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:07:32.001720659 +0000 UTC m=+0.756957571,LastTimestamp:2025-05-08 00:07:32.001720659 +0000 UTC m=+0.756957571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:07:32.043623 kubelet[2435]: W0508 00:07:32.043485 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:32.043623 kubelet[2435]: E0508 00:07:32.043513 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:32.044849 kubelet[2435]: I0508 00:07:32.044266 2435 factory.go:221] Registration of the systemd container factory successfully May 8 00:07:32.044849 kubelet[2435]: I0508 00:07:32.044314 2435 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:07:32.049652 kubelet[2435]: I0508 00:07:32.048665 2435 factory.go:221] Registration of the containerd container factory successfully May 8 00:07:32.051081 kubelet[2435]: I0508 00:07:32.051066 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:07:32.051751 kubelet[2435]: I0508 00:07:32.051743 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:07:32.051805 kubelet[2435]: I0508 00:07:32.051798 2435 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:07:32.051850 kubelet[2435]: I0508 00:07:32.051845 2435 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:07:32.051881 kubelet[2435]: I0508 00:07:32.051877 2435 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:07:32.051936 kubelet[2435]: E0508 00:07:32.051927 2435 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:07:32.056244 kubelet[2435]: W0508 00:07:32.056223 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:32.056321 kubelet[2435]: E0508 00:07:32.056310 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:32.057104 kubelet[2435]: E0508 00:07:32.057095 2435 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:07:32.074477 kubelet[2435]: E0508 00:07:32.073839 2435 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d649c13eee953 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:07:32.001720659 +0000 UTC m=+0.756957571,LastTimestamp:2025-05-08 00:07:32.001720659 +0000 UTC m=+0.756957571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:07:32.076831 kubelet[2435]: I0508 00:07:32.076819 2435 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:07:32.076910 kubelet[2435]: I0508 00:07:32.076903 2435 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:07:32.076946 kubelet[2435]: I0508 00:07:32.076942 2435 state_mem.go:36] "Initialized new in-memory state store" May 8 00:07:32.079343 kubelet[2435]: I0508 00:07:32.079326 2435 policy_none.go:49] "None policy: Start" May 8 00:07:32.079343 kubelet[2435]: I0508 00:07:32.079347 2435 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:07:32.079431 kubelet[2435]: I0508 00:07:32.079360 2435 state_mem.go:35] "Initializing new in-memory state store" May 8 00:07:32.082590 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:07:32.092312 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:07:32.094428 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:07:32.098000 kubelet[2435]: I0508 00:07:32.097983 2435 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:07:32.098103 kubelet[2435]: I0508 00:07:32.098092 2435 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:07:32.098127 kubelet[2435]: I0508 00:07:32.098110 2435 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:07:32.098632 kubelet[2435]: I0508 00:07:32.098422 2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:07:32.099082 kubelet[2435]: E0508 00:07:32.099006 2435 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:07:32.099082 kubelet[2435]: E0508 00:07:32.099033 2435 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:07:32.157877 systemd[1]: Created slice kubepods-burstable-pode636c1ec48a17d0e1d62a61feec9b823.slice - libcontainer container kubepods-burstable-pode636c1ec48a17d0e1d62a61feec9b823.slice. May 8 00:07:32.164275 kubelet[2435]: E0508 00:07:32.164252 2435 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:07:32.168064 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 8 00:07:32.173589 kubelet[2435]: E0508 00:07:32.173574 2435 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:07:32.175652 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
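Each static pod gets its own libcontainer slice, and the 32-character hex suffix in kubepods-burstable-pod<UID>.slice is the pod UID that reappears in the RunPodSandbox and host-path volume entries that follow (e636… for kube-apiserver-localhost, 5386… for kube-controller-manager-localhost, 2980… for kube-scheduler-localhost). A sketch that collects those UIDs from the hypothetical node.log export:

```python
import re

LOG_PATH = "node.log"  # hypothetical export of this journal

slice_re = re.compile(r"kubepods-burstable-pod([0-9a-f]{32})\.slice")

with open(LOG_PATH, encoding="utf-8") as fh:
    pod_uids = sorted(set(slice_re.findall(fh.read())))

# The UIDs printed here should line up with the Uid: fields of the
# RunPodSandbox entries and the host-path volume names further down.
for uid in pod_uids:
    print(uid)
```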
May 8 00:07:32.176865 kubelet[2435]: E0508 00:07:32.176850 2435 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:07:32.200052 kubelet[2435]: I0508 00:07:32.200018 2435 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:07:32.200312 kubelet[2435]: E0508 00:07:32.200295 2435 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:07:32.233832 kubelet[2435]: I0508 00:07:32.233686 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:07:32.233832 kubelet[2435]: I0508 00:07:32.233718 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e636c1ec48a17d0e1d62a61feec9b823-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e636c1ec48a17d0e1d62a61feec9b823\") " pod="kube-system/kube-apiserver-localhost" May 8 00:07:32.233832 kubelet[2435]: I0508 00:07:32.233730 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e636c1ec48a17d0e1d62a61feec9b823-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e636c1ec48a17d0e1d62a61feec9b823\") " pod="kube-system/kube-apiserver-localhost" May 8 00:07:32.233832 kubelet[2435]: I0508 00:07:32.233739 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:32.233832 kubelet[2435]: I0508 00:07:32.233748 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:32.233990 kubelet[2435]: I0508 00:07:32.233758 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:32.233990 kubelet[2435]: I0508 00:07:32.233766 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e636c1ec48a17d0e1d62a61feec9b823-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e636c1ec48a17d0e1d62a61feec9b823\") " pod="kube-system/kube-apiserver-localhost" May 8 00:07:32.233990 kubelet[2435]: I0508 00:07:32.233775 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:32.233990 kubelet[2435]: I0508 00:07:32.233784 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:32.244192 kubelet[2435]: E0508 00:07:32.244159 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" May 8 00:07:32.402534 kubelet[2435]: I0508 00:07:32.401796 2435 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:07:32.402534 kubelet[2435]: E0508 00:07:32.402476 2435 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:07:32.466202 containerd[1554]: time="2025-05-08T00:07:32.466037777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e636c1ec48a17d0e1d62a61feec9b823,Namespace:kube-system,Attempt:0,}" May 8 00:07:32.475039 containerd[1554]: time="2025-05-08T00:07:32.475013795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 00:07:32.478493 containerd[1554]: time="2025-05-08T00:07:32.478362654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 00:07:32.645152 kubelet[2435]: E0508 00:07:32.645125 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" May 8 00:07:32.803918 kubelet[2435]: I0508 00:07:32.803887 2435 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:07:32.902311 kubelet[2435]: E0508 00:07:32.804145 2435 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:07:32.991689 kubelet[2435]: W0508 00:07:32.991582 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:32.991689 kubelet[2435]: E0508 00:07:32.991653 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" 
May 8 00:07:33.283202 kubelet[2435]: W0508 00:07:33.283164 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:33.283290 kubelet[2435]: E0508 00:07:33.283220 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:33.307837 kubelet[2435]: W0508 00:07:33.307796 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:33.307928 kubelet[2435]: E0508 00:07:33.307850 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:33.445453 kubelet[2435]: E0508 00:07:33.445417 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s" May 8 00:07:33.544751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502398060.mount: Deactivated successfully. 
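All of the connection-refused errors above target https://139.178.70.106:6443, presumably the kube-apiserver that this same kubelet is about to launch as a static pod; they stop once that container is up and the node registers at 00:07:35. In the meantime the lease controller backs off, with the retry interval growing from 200ms to 400ms, 800ms and then 1.6s in this excerpt. A sketch that pulls those intervals out of the hypothetical node.log export:

```python
import re

LOG_PATH = "node.log"  # hypothetical export of this journal

lease_re = re.compile(r'Failed to ensure lease exists, will retry.*?interval=\\?"([0-9.]+(?:ms|s))')

with open(LOG_PATH, encoding="utf-8") as fh:
    intervals = lease_re.findall(fh.read())

# For this excerpt the list should read ['200ms', '400ms', '800ms', '1.6s'],
# showing the doubling backoff while the API server is still unreachable.
print(intervals)
```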
May 8 00:07:33.567611 kubelet[2435]: W0508 00:07:33.567565 2435 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:07:33.567690 kubelet[2435]: E0508 00:07:33.567621 2435 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:33.572683 containerd[1554]: time="2025-05-08T00:07:33.572608744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:07:33.599144 containerd[1554]: time="2025-05-08T00:07:33.599107230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:07:33.604852 containerd[1554]: time="2025-05-08T00:07:33.604823463Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:07:33.605768 kubelet[2435]: I0508 00:07:33.605701 2435 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:07:33.605951 kubelet[2435]: E0508 00:07:33.605930 2435 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:07:33.612813 containerd[1554]: time="2025-05-08T00:07:33.612786925Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:07:33.623223 containerd[1554]: time="2025-05-08T00:07:33.623054358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:07:33.630606 containerd[1554]: time="2025-05-08T00:07:33.630505791Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:07:33.638583 containerd[1554]: time="2025-05-08T00:07:33.638337317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:07:33.639065 containerd[1554]: time="2025-05-08T00:07:33.639039924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.172932957s" May 8 00:07:33.642963 containerd[1554]: time="2025-05-08T00:07:33.641340862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:07:33.642963 
containerd[1554]: time="2025-05-08T00:07:33.642881170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.167811044s" May 8 00:07:33.653788 containerd[1554]: time="2025-05-08T00:07:33.653761716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.175351039s" May 8 00:07:33.943260 containerd[1554]: time="2025-05-08T00:07:33.943166769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:07:33.943345 containerd[1554]: time="2025-05-08T00:07:33.943204604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:07:33.943345 containerd[1554]: time="2025-05-08T00:07:33.943214446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:33.944029 containerd[1554]: time="2025-05-08T00:07:33.943996314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:07:33.944100 containerd[1554]: time="2025-05-08T00:07:33.944086892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:07:33.944163 containerd[1554]: time="2025-05-08T00:07:33.944150154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:33.944262 containerd[1554]: time="2025-05-08T00:07:33.944249800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:33.947310 containerd[1554]: time="2025-05-08T00:07:33.943010507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:07:33.947310 containerd[1554]: time="2025-05-08T00:07:33.946991362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:07:33.947310 containerd[1554]: time="2025-05-08T00:07:33.947014767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:33.947310 containerd[1554]: time="2025-05-08T00:07:33.947066367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:33.947428 containerd[1554]: time="2025-05-08T00:07:33.946099982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:33.965835 systemd[1]: Started cri-containerd-2d501f467c7ba08b8b6da8acd40829e13cca4ea455e9d12a2c05433670491600.scope - libcontainer container 2d501f467c7ba08b8b6da8acd40829e13cca4ea455e9d12a2c05433670491600. 
May 8 00:07:33.971634 systemd[1]: Started cri-containerd-a83fa2bb634a302595701ceb25b51607e89a904368b643985fe64d3da44adb1d.scope - libcontainer container a83fa2bb634a302595701ceb25b51607e89a904368b643985fe64d3da44adb1d. May 8 00:07:33.974954 systemd[1]: Started cri-containerd-6225c108428a82134e129abf48ae31a209126f24ab1972600d1cda28aa7c9900.scope - libcontainer container 6225c108428a82134e129abf48ae31a209126f24ab1972600d1cda28aa7c9900. May 8 00:07:34.007190 kubelet[2435]: E0508 00:07:34.007106 2435 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 8 00:07:34.019340 containerd[1554]: time="2025-05-08T00:07:34.017264029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d501f467c7ba08b8b6da8acd40829e13cca4ea455e9d12a2c05433670491600\"" May 8 00:07:34.020399 containerd[1554]: time="2025-05-08T00:07:34.020382553Z" level=info msg="CreateContainer within sandbox \"2d501f467c7ba08b8b6da8acd40829e13cca4ea455e9d12a2c05433670491600\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:07:34.025902 containerd[1554]: time="2025-05-08T00:07:34.025877855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e636c1ec48a17d0e1d62a61feec9b823,Namespace:kube-system,Attempt:0,} returns sandbox id \"a83fa2bb634a302595701ceb25b51607e89a904368b643985fe64d3da44adb1d\"" May 8 00:07:34.030102 containerd[1554]: time="2025-05-08T00:07:34.030081005Z" level=info msg="CreateContainer within sandbox \"a83fa2bb634a302595701ceb25b51607e89a904368b643985fe64d3da44adb1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:07:34.033628 containerd[1554]: time="2025-05-08T00:07:34.033561698Z" level=info msg="CreateContainer within sandbox \"2d501f467c7ba08b8b6da8acd40829e13cca4ea455e9d12a2c05433670491600\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2248d5c3d3900918ec0d3167dac048a0cdce5905276d1d1a6e87052f9b1070d9\"" May 8 00:07:34.034007 containerd[1554]: time="2025-05-08T00:07:34.033864599Z" level=info msg="StartContainer for \"2248d5c3d3900918ec0d3167dac048a0cdce5905276d1d1a6e87052f9b1070d9\"" May 8 00:07:34.037727 containerd[1554]: time="2025-05-08T00:07:34.037684352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"6225c108428a82134e129abf48ae31a209126f24ab1972600d1cda28aa7c9900\"" May 8 00:07:34.038318 containerd[1554]: time="2025-05-08T00:07:34.038303731Z" level=info msg="CreateContainer within sandbox \"a83fa2bb634a302595701ceb25b51607e89a904368b643985fe64d3da44adb1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99c3c54c70aac5bfcc8811344b67ad5f4702afc29a5dc517e1db5e4ba345803d\"" May 8 00:07:34.038715 containerd[1554]: time="2025-05-08T00:07:34.038643239Z" level=info msg="StartContainer for \"99c3c54c70aac5bfcc8811344b67ad5f4702afc29a5dc517e1db5e4ba345803d\"" May 8 00:07:34.039130 containerd[1554]: time="2025-05-08T00:07:34.039027600Z" level=info msg="CreateContainer 
within sandbox \"6225c108428a82134e129abf48ae31a209126f24ab1972600d1cda28aa7c9900\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:07:34.049517 containerd[1554]: time="2025-05-08T00:07:34.049492238Z" level=info msg="CreateContainer within sandbox \"6225c108428a82134e129abf48ae31a209126f24ab1972600d1cda28aa7c9900\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62d90aabfe5706a9a33781e292adb809221b4ab55b3cbc9ab78f33c3e49f37e5\"" May 8 00:07:34.049903 containerd[1554]: time="2025-05-08T00:07:34.049827001Z" level=info msg="StartContainer for \"62d90aabfe5706a9a33781e292adb809221b4ab55b3cbc9ab78f33c3e49f37e5\"" May 8 00:07:34.065663 systemd[1]: Started cri-containerd-2248d5c3d3900918ec0d3167dac048a0cdce5905276d1d1a6e87052f9b1070d9.scope - libcontainer container 2248d5c3d3900918ec0d3167dac048a0cdce5905276d1d1a6e87052f9b1070d9. May 8 00:07:34.066722 systemd[1]: Started cri-containerd-99c3c54c70aac5bfcc8811344b67ad5f4702afc29a5dc517e1db5e4ba345803d.scope - libcontainer container 99c3c54c70aac5bfcc8811344b67ad5f4702afc29a5dc517e1db5e4ba345803d. May 8 00:07:34.079638 systemd[1]: Started cri-containerd-62d90aabfe5706a9a33781e292adb809221b4ab55b3cbc9ab78f33c3e49f37e5.scope - libcontainer container 62d90aabfe5706a9a33781e292adb809221b4ab55b3cbc9ab78f33c3e49f37e5. May 8 00:07:34.107658 containerd[1554]: time="2025-05-08T00:07:34.107567936Z" level=info msg="StartContainer for \"99c3c54c70aac5bfcc8811344b67ad5f4702afc29a5dc517e1db5e4ba345803d\" returns successfully" May 8 00:07:34.117689 containerd[1554]: time="2025-05-08T00:07:34.117665913Z" level=info msg="StartContainer for \"2248d5c3d3900918ec0d3167dac048a0cdce5905276d1d1a6e87052f9b1070d9\" returns successfully" May 8 00:07:34.130184 containerd[1554]: time="2025-05-08T00:07:34.130159341Z" level=info msg="StartContainer for \"62d90aabfe5706a9a33781e292adb809221b4ab55b3cbc9ab78f33c3e49f37e5\" returns successfully" May 8 00:07:35.069273 kubelet[2435]: E0508 00:07:35.069253 2435 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:07:35.070686 kubelet[2435]: E0508 00:07:35.069800 2435 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:07:35.071535 kubelet[2435]: E0508 00:07:35.071525 2435 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:07:35.208573 kubelet[2435]: I0508 00:07:35.207987 2435 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:07:35.501413 kubelet[2435]: E0508 00:07:35.501387 2435 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:07:35.577600 kubelet[2435]: I0508 00:07:35.577574 2435 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:07:35.631610 kubelet[2435]: I0508 00:07:35.631583 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:07:35.635247 kubelet[2435]: E0508 00:07:35.635074 2435 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:07:35.635247 kubelet[2435]: I0508 00:07:35.635092 
2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:07:35.636155 kubelet[2435]: E0508 00:07:35.636093 2435 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:07:35.636155 kubelet[2435]: I0508 00:07:35.636114 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:35.637218 kubelet[2435]: E0508 00:07:35.637197 2435 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:35.996665 kubelet[2435]: I0508 00:07:35.996646 2435 apiserver.go:52] "Watching apiserver" May 8 00:07:36.033383 kubelet[2435]: I0508 00:07:36.033343 2435 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:07:36.071893 kubelet[2435]: I0508 00:07:36.071837 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:36.072584 kubelet[2435]: I0508 00:07:36.072270 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:07:36.072584 kubelet[2435]: I0508 00:07:36.072478 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:07:36.076052 kubelet[2435]: E0508 00:07:36.075825 2435 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:36.076052 kubelet[2435]: E0508 00:07:36.075976 2435 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:07:36.076288 kubelet[2435]: E0508 00:07:36.076280 2435 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:07:37.073949 kubelet[2435]: I0508 00:07:37.073928 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:07:37.074204 kubelet[2435]: I0508 00:07:37.074113 2435 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:07:37.461266 systemd[1]: Reload requested from client PID 2711 ('systemctl') (unit session-9.scope)... May 8 00:07:37.461275 systemd[1]: Reloading... May 8 00:07:37.515641 zram_generator::config[2756]: No configuration found. May 8 00:07:37.586139 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:07:37.604461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:07:37.679653 systemd[1]: Reloading finished in 218 ms. 
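The sandbox and container IDs in the containerd entries above can be stitched back together: each "RunPodSandbox ... returns sandbox id" message ties a static pod name to its sandbox, and each "CreateContainer within sandbox ... returns container id" message ties that sandbox to the container whose "StartContainer ... returns successfully" entry follows. A sketch that rebuilds the pod to sandbox to container mapping from the hypothetical node.log export (one entry per line assumed):

```python
import re

LOG_PATH = "node.log"  # hypothetical export; one journal entry per line assumed

sandbox_re = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*?returns sandbox id \\?"([0-9a-f]{64})\\?"')
container_re = re.compile(
    r'CreateContainer within sandbox \\?"([0-9a-f]{64})\\?" for &ContainerMetadata\{Name:([^,]+),'
    r'.*?returns container id \\?"([0-9a-f]{64})\\?"')

pods, containers = {}, {}
with open(LOG_PATH, encoding="utf-8") as fh:
    for line in fh:
        m = sandbox_re.search(line)
        if m:
            pods[m.group(2)] = m.group(1)                       # sandbox id -> pod name
        m = container_re.search(line)
        if m:
            containers[m.group(1)] = (m.group(2), m.group(3))   # sandbox id -> (container name, id)

for sandbox, pod in pods.items():
    name, cid = containers.get(sandbox, ("?", "?"))
    print(f"{pod}: sandbox {sandbox[:12]}  container {name} {cid[:12]}")
```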
May 8 00:07:37.698985 kubelet[2435]: I0508 00:07:37.698945 2435 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:07:37.699222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:37.711293 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:07:37.711468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:37.711506 systemd[1]: kubelet.service: Consumed 529ms CPU time, 124.9M memory peak. May 8 00:07:37.714761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:07:37.886145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:07:37.889928 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:07:38.085804 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:07:38.085804 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:07:38.085804 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:07:38.085804 kubelet[2823]: I0508 00:07:38.084614 2823 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:07:38.114843 kubelet[2823]: I0508 00:07:38.114814 2823 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:07:38.114843 kubelet[2823]: I0508 00:07:38.114837 2823 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:07:38.115050 kubelet[2823]: I0508 00:07:38.115034 2823 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:07:38.119718 kubelet[2823]: I0508 00:07:38.119698 2823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:07:38.150040 kubelet[2823]: I0508 00:07:38.149904 2823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:07:38.152160 kubelet[2823]: E0508 00:07:38.152112 2823 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:07:38.152224 kubelet[2823]: I0508 00:07:38.152161 2823 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:07:38.155551 kubelet[2823]: I0508 00:07:38.154774 2823 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:07:38.155551 kubelet[2823]: I0508 00:07:38.154924 2823 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:07:38.155551 kubelet[2823]: I0508 00:07:38.154940 2823 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:07:38.155551 kubelet[2823]: I0508 00:07:38.155126 2823 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:07:38.155695 kubelet[2823]: I0508 00:07:38.155150 2823 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:07:38.155695 kubelet[2823]: I0508 00:07:38.155176 2823 state_mem.go:36] "Initialized new in-memory state store" May 8 00:07:38.155695 kubelet[2823]: I0508 00:07:38.155303 2823 kubelet.go:446] "Attempting to sync node with API server" May 8 00:07:38.155695 kubelet[2823]: I0508 00:07:38.155310 2823 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:07:38.155695 kubelet[2823]: I0508 00:07:38.155323 2823 kubelet.go:352] "Adding apiserver pod source" May 8 00:07:38.155695 kubelet[2823]: I0508 00:07:38.155336 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:07:38.182529 kubelet[2823]: I0508 00:07:38.182504 2823 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:07:38.183412 kubelet[2823]: I0508 00:07:38.182882 2823 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:07:38.183412 kubelet[2823]: I0508 00:07:38.183230 2823 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:07:38.183412 kubelet[2823]: I0508 00:07:38.183252 2823 server.go:1287] "Started kubelet" May 8 00:07:38.184162 kubelet[2823]: I0508 00:07:38.183913 2823 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:07:38.192805 kubelet[2823]: I0508 00:07:38.192449 2823 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:07:38.198563 kubelet[2823]: I0508 00:07:38.196805 2823 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:07:38.198563 kubelet[2823]: I0508 00:07:38.197975 2823 server.go:490] "Adding debug handlers to kubelet server" May 8 00:07:38.199218 kubelet[2823]: I0508 00:07:38.199141 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:07:38.203406 kubelet[2823]: I0508 00:07:38.203391 2823 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:07:38.210301 kubelet[2823]: I0508 00:07:38.210288 2823 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:07:38.210516 kubelet[2823]: E0508 00:07:38.210506 2823 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:07:38.210899 kubelet[2823]: I0508 00:07:38.210885 2823 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:07:38.210981 kubelet[2823]: I0508 00:07:38.210972 2823 reconciler.go:26] "Reconciler: start to sync state" May 8 00:07:38.214134 kubelet[2823]: I0508 00:07:38.214119 2823 factory.go:221] Registration of the systemd container factory successfully May 8 00:07:38.214269 kubelet[2823]: I0508 00:07:38.214186 2823 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:07:38.215110 kubelet[2823]: E0508 00:07:38.214976 2823 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:07:38.215239 kubelet[2823]: I0508 00:07:38.215179 2823 factory.go:221] Registration of the containerd container factory successfully May 8 00:07:38.235168 kubelet[2823]: I0508 00:07:38.235099 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:07:38.235819 kubelet[2823]: I0508 00:07:38.235808 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:07:38.240419 kubelet[2823]: I0508 00:07:38.240164 2823 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:07:38.240419 kubelet[2823]: I0508 00:07:38.240200 2823 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:07:38.240419 kubelet[2823]: I0508 00:07:38.240205 2823 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:07:38.240419 kubelet[2823]: E0508 00:07:38.240244 2823 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:07:38.276208 kubelet[2823]: I0508 00:07:38.276188 2823 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:07:38.276208 kubelet[2823]: I0508 00:07:38.276204 2823 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:07:38.276312 kubelet[2823]: I0508 00:07:38.276222 2823 state_mem.go:36] "Initialized new in-memory state store" May 8 00:07:38.276371 kubelet[2823]: I0508 00:07:38.276357 2823 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:07:38.276393 kubelet[2823]: I0508 00:07:38.276372 2823 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:07:38.283000 kubelet[2823]: I0508 00:07:38.282980 2823 policy_none.go:49] "None policy: Start" May 8 00:07:38.283051 kubelet[2823]: I0508 00:07:38.283003 2823 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:07:38.283051 kubelet[2823]: I0508 00:07:38.283014 2823 state_mem.go:35] "Initializing new in-memory state store" May 8 00:07:38.283111 kubelet[2823]: I0508 00:07:38.283098 2823 state_mem.go:75] "Updated machine memory state" May 8 00:07:38.285631 kubelet[2823]: I0508 00:07:38.285613 2823 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:07:38.285719 kubelet[2823]: I0508 00:07:38.285707 2823 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:07:38.285747 kubelet[2823]: I0508 00:07:38.285717 2823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:07:38.285924 kubelet[2823]: I0508 00:07:38.285912 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:07:38.286823 kubelet[2823]: E0508 00:07:38.286801 2823 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:07:38.341535 kubelet[2823]: I0508 00:07:38.341458 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:07:38.346387 kubelet[2823]: I0508 00:07:38.346374 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:38.346583 kubelet[2823]: I0508 00:07:38.346401 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:07:38.366582 kubelet[2823]: E0508 00:07:38.366526 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:07:38.366582 kubelet[2823]: E0508 00:07:38.366554 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:07:38.387713 kubelet[2823]: I0508 00:07:38.387640 2823 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:07:38.412215 kubelet[2823]: I0508 00:07:38.412192 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e636c1ec48a17d0e1d62a61feec9b823-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e636c1ec48a17d0e1d62a61feec9b823\") " pod="kube-system/kube-apiserver-localhost" May 8 00:07:38.412486 kubelet[2823]: I0508 00:07:38.412350 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e636c1ec48a17d0e1d62a61feec9b823-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e636c1ec48a17d0e1d62a61feec9b823\") " pod="kube-system/kube-apiserver-localhost" May 8 00:07:38.412486 kubelet[2823]: I0508 00:07:38.412379 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:38.412486 kubelet[2823]: I0508 00:07:38.412392 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:38.412486 kubelet[2823]: I0508 00:07:38.412404 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:38.412486 kubelet[2823]: I0508 00:07:38.412415 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:07:38.412664 kubelet[2823]: I0508 00:07:38.412427 2823 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e636c1ec48a17d0e1d62a61feec9b823-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e636c1ec48a17d0e1d62a61feec9b823\") " pod="kube-system/kube-apiserver-localhost" May 8 00:07:38.412664 kubelet[2823]: I0508 00:07:38.412438 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:38.412664 kubelet[2823]: I0508 00:07:38.412450 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:07:38.450968 kubelet[2823]: I0508 00:07:38.450928 2823 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 00:07:38.451151 kubelet[2823]: I0508 00:07:38.451040 2823 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:07:38.492101 sudo[2856]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:07:38.492308 sudo[2856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:07:38.863758 sudo[2856]: pam_unix(sudo:session): session closed for user root May 8 00:07:39.179135 kubelet[2823]: I0508 00:07:39.178927 2823 apiserver.go:52] "Watching apiserver" May 8 00:07:39.211803 kubelet[2823]: I0508 00:07:39.211770 2823 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:07:39.264036 kubelet[2823]: I0508 00:07:39.263921 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:07:39.264188 kubelet[2823]: I0508 00:07:39.264176 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:07:39.264304 kubelet[2823]: I0508 00:07:39.264293 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:39.269811 kubelet[2823]: E0508 00:07:39.268702 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:07:39.269811 kubelet[2823]: E0508 00:07:39.268840 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:07:39.269811 kubelet[2823]: E0508 00:07:39.268926 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:07:39.282506 kubelet[2823]: I0508 00:07:39.282470 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.282453802 podStartE2EDuration="2.282453802s" podCreationTimestamp="2025-05-08 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-08 00:07:39.278860395 +0000 UTC m=+1.274739202" watchObservedRunningTime="2025-05-08 00:07:39.282453802 +0000 UTC m=+1.278332610" May 8 00:07:39.286674 kubelet[2823]: I0508 00:07:39.286129 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.286117503 podStartE2EDuration="1.286117503s" podCreationTimestamp="2025-05-08 00:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:07:39.282633356 +0000 UTC m=+1.278512165" watchObservedRunningTime="2025-05-08 00:07:39.286117503 +0000 UTC m=+1.281996315" May 8 00:07:39.286674 kubelet[2823]: I0508 00:07:39.286176 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.286172071 podStartE2EDuration="2.286172071s" podCreationTimestamp="2025-05-08 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:07:39.286088546 +0000 UTC m=+1.281967364" watchObservedRunningTime="2025-05-08 00:07:39.286172071 +0000 UTC m=+1.282050896" May 8 00:07:40.111309 sudo[1863]: pam_unix(sudo:session): session closed for user root May 8 00:07:40.112342 sshd[1862]: Connection closed by 139.178.89.65 port 46976 May 8 00:07:40.113036 sshd-session[1859]: pam_unix(sshd:session): session closed for user core May 8 00:07:40.114978 systemd[1]: sshd@7-139.178.70.106:22-139.178.89.65:46976.service: Deactivated successfully. May 8 00:07:40.116106 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:07:40.116217 systemd[1]: session-9.scope: Consumed 2.987s CPU time, 210.6M memory peak. May 8 00:07:40.116964 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. May 8 00:07:40.117871 systemd-logind[1535]: Removed session 9. May 8 00:07:44.072674 kubelet[2823]: I0508 00:07:44.072656 2823 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:07:44.073649 containerd[1554]: time="2025-05-08T00:07:44.073628805Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:07:44.074376 kubelet[2823]: I0508 00:07:44.074355 2823 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:07:44.250510 kubelet[2823]: I0508 00:07:44.249472 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-bpf-maps\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250510 kubelet[2823]: I0508 00:07:44.249492 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-lib-modules\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250510 kubelet[2823]: I0508 00:07:44.249504 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-kernel\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250510 kubelet[2823]: I0508 00:07:44.249514 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-xtables-lock\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250510 kubelet[2823]: I0508 00:07:44.249524 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4536a78-acd4-422c-bf1b-766185f38f1a-kube-proxy\") pod \"kube-proxy-2qpgf\" (UID: \"a4536a78-acd4-422c-bf1b-766185f38f1a\") " pod="kube-system/kube-proxy-2qpgf" May 8 00:07:44.250510 kubelet[2823]: I0508 00:07:44.249539 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-etc-cni-netd\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250990 kubelet[2823]: I0508 00:07:44.249574 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4536a78-acd4-422c-bf1b-766185f38f1a-lib-modules\") pod \"kube-proxy-2qpgf\" (UID: \"a4536a78-acd4-422c-bf1b-766185f38f1a\") " pod="kube-system/kube-proxy-2qpgf" May 8 00:07:44.250990 kubelet[2823]: I0508 00:07:44.249584 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-hubble-tls\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250990 kubelet[2823]: I0508 00:07:44.249593 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ae90792-07ab-4a93-9671-7f095765d7e9-clustermesh-secrets\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250990 kubelet[2823]: I0508 
00:07:44.249601 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-config-path\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.250990 kubelet[2823]: I0508 00:07:44.249609 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4536a78-acd4-422c-bf1b-766185f38f1a-xtables-lock\") pod \"kube-proxy-2qpgf\" (UID: \"a4536a78-acd4-422c-bf1b-766185f38f1a\") " pod="kube-system/kube-proxy-2qpgf" May 8 00:07:44.250990 kubelet[2823]: I0508 00:07:44.249624 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-run\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.251086 kubelet[2823]: I0508 00:07:44.249651 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-hostproc\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.251086 kubelet[2823]: I0508 00:07:44.249667 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-cgroup\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.251086 kubelet[2823]: I0508 00:07:44.249682 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-net\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.251086 kubelet[2823]: I0508 00:07:44.249698 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjdc\" (UniqueName: \"kubernetes.io/projected/a4536a78-acd4-422c-bf1b-766185f38f1a-kube-api-access-kpjdc\") pod \"kube-proxy-2qpgf\" (UID: \"a4536a78-acd4-422c-bf1b-766185f38f1a\") " pod="kube-system/kube-proxy-2qpgf" May 8 00:07:44.251086 kubelet[2823]: I0508 00:07:44.249727 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cni-path\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.251164 kubelet[2823]: I0508 00:07:44.249741 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4nj\" (UniqueName: \"kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-kube-api-access-zp4nj\") pod \"cilium-dk8bs\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " pod="kube-system/cilium-dk8bs" May 8 00:07:44.251641 systemd[1]: Created slice kubepods-burstable-pod6ae90792_07ab_4a93_9671_7f095765d7e9.slice - libcontainer container kubepods-burstable-pod6ae90792_07ab_4a93_9671_7f095765d7e9.slice. 
May 8 00:07:44.256406 systemd[1]: Created slice kubepods-besteffort-poda4536a78_acd4_422c_bf1b_766185f38f1a.slice - libcontainer container kubepods-besteffort-poda4536a78_acd4_422c_bf1b_766185f38f1a.slice. May 8 00:07:44.362280 kubelet[2823]: E0508 00:07:44.362224 2823 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:07:44.362280 kubelet[2823]: E0508 00:07:44.362246 2823 projected.go:194] Error preparing data for projected volume kube-api-access-zp4nj for pod kube-system/cilium-dk8bs: configmap "kube-root-ca.crt" not found May 8 00:07:44.362280 kubelet[2823]: E0508 00:07:44.362278 2823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-kube-api-access-zp4nj podName:6ae90792-07ab-4a93-9671-7f095765d7e9 nodeName:}" failed. No retries permitted until 2025-05-08 00:07:44.862266105 +0000 UTC m=+6.858144915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zp4nj" (UniqueName: "kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-kube-api-access-zp4nj") pod "cilium-dk8bs" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9") : configmap "kube-root-ca.crt" not found May 8 00:07:44.362814 kubelet[2823]: E0508 00:07:44.362222 2823 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:07:44.362814 kubelet[2823]: E0508 00:07:44.362481 2823 projected.go:194] Error preparing data for projected volume kube-api-access-kpjdc for pod kube-system/kube-proxy-2qpgf: configmap "kube-root-ca.crt" not found May 8 00:07:44.362814 kubelet[2823]: E0508 00:07:44.362502 2823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4536a78-acd4-422c-bf1b-766185f38f1a-kube-api-access-kpjdc podName:a4536a78-acd4-422c-bf1b-766185f38f1a nodeName:}" failed. No retries permitted until 2025-05-08 00:07:44.862493604 +0000 UTC m=+6.858372415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kpjdc" (UniqueName: "kubernetes.io/projected/a4536a78-acd4-422c-bf1b-766185f38f1a-kube-api-access-kpjdc") pod "kube-proxy-2qpgf" (UID: "a4536a78-acd4-422c-bf1b-766185f38f1a") : configmap "kube-root-ca.crt" not found May 8 00:07:45.156317 containerd[1554]: time="2025-05-08T00:07:45.156291900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dk8bs,Uid:6ae90792-07ab-4a93-9671-7f095765d7e9,Namespace:kube-system,Attempt:0,}" May 8 00:07:45.163139 containerd[1554]: time="2025-05-08T00:07:45.163011161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qpgf,Uid:a4536a78-acd4-422c-bf1b-766185f38f1a,Namespace:kube-system,Attempt:0,}" May 8 00:07:45.206518 kubelet[2823]: I0508 00:07:45.206107 2823 status_manager.go:890] "Failed to get status for pod" podUID="2cba70d1-0f21-44d6-970e-5d2a01d15dfb" pod="kube-system/cilium-operator-6c4d7847fc-nl7zn" err="pods \"cilium-operator-6c4d7847fc-nl7zn\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 8 00:07:45.209909 systemd[1]: Created slice kubepods-besteffort-pod2cba70d1_0f21_44d6_970e_5d2a01d15dfb.slice - libcontainer container kubepods-besteffort-pod2cba70d1_0f21_44d6_970e_5d2a01d15dfb.slice. 
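The "Created slice" entries show the systemd cgroup driver's naming scheme for pod cgroups: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID mapped to underscores. The sketch below reproduces exactly the slice names visible in this log from the QoS class and pod UID; it mirrors the observed pattern rather than the kubelet's own escaping code:

```go
// Sketch: reproduce the slice names visible in the log from (QoS class, pod UID).
// This mirrors the pattern seen above; it is not the kubelet's own escaping code.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "6ae90792-07ab-4a93-9671-7f095765d7e9"))
	// kubepods-burstable-pod6ae90792_07ab_4a93_9671_7f095765d7e9.slice
	fmt.Println(podSlice("besteffort", "a4536a78-acd4-422c-bf1b-766185f38f1a"))
	// kubepods-besteffort-poda4536a78_acd4_422c_bf1b_766185f38f1a.slice
}
```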
May 8 00:07:45.259274 kubelet[2823]: I0508 00:07:45.258600 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nl7zn\" (UID: \"2cba70d1-0f21-44d6-970e-5d2a01d15dfb\") " pod="kube-system/cilium-operator-6c4d7847fc-nl7zn" May 8 00:07:45.259274 kubelet[2823]: I0508 00:07:45.258628 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8sx\" (UniqueName: \"kubernetes.io/projected/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-kube-api-access-zr8sx\") pod \"cilium-operator-6c4d7847fc-nl7zn\" (UID: \"2cba70d1-0f21-44d6-970e-5d2a01d15dfb\") " pod="kube-system/cilium-operator-6c4d7847fc-nl7zn" May 8 00:07:45.269405 containerd[1554]: time="2025-05-08T00:07:45.269127168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:07:45.269405 containerd[1554]: time="2025-05-08T00:07:45.269208123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:07:45.269405 containerd[1554]: time="2025-05-08T00:07:45.269232688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:45.269405 containerd[1554]: time="2025-05-08T00:07:45.269350835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:45.275072 containerd[1554]: time="2025-05-08T00:07:45.274985334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:07:45.275072 containerd[1554]: time="2025-05-08T00:07:45.275028169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:07:45.275198 containerd[1554]: time="2025-05-08T00:07:45.275079809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:45.275198 containerd[1554]: time="2025-05-08T00:07:45.275145470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:45.295713 systemd[1]: Started cri-containerd-57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe.scope - libcontainer container 57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe. May 8 00:07:45.299063 systemd[1]: Started cri-containerd-25c52c274c12bfc67953c0bb61f5b872d42d46631c838480782290d9bc3e5948.scope - libcontainer container 25c52c274c12bfc67953c0bb61f5b872d42d46631c838480782290d9bc3e5948. 
May 8 00:07:45.319996 containerd[1554]: time="2025-05-08T00:07:45.319971742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qpgf,Uid:a4536a78-acd4-422c-bf1b-766185f38f1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"25c52c274c12bfc67953c0bb61f5b872d42d46631c838480782290d9bc3e5948\"" May 8 00:07:45.326699 containerd[1554]: time="2025-05-08T00:07:45.322401464Z" level=info msg="CreateContainer within sandbox \"25c52c274c12bfc67953c0bb61f5b872d42d46631c838480782290d9bc3e5948\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:07:45.332635 containerd[1554]: time="2025-05-08T00:07:45.332534631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dk8bs,Uid:6ae90792-07ab-4a93-9671-7f095765d7e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\"" May 8 00:07:45.333743 containerd[1554]: time="2025-05-08T00:07:45.333724902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:07:45.382446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241967192.mount: Deactivated successfully. May 8 00:07:45.385227 containerd[1554]: time="2025-05-08T00:07:45.385202755Z" level=info msg="CreateContainer within sandbox \"25c52c274c12bfc67953c0bb61f5b872d42d46631c838480782290d9bc3e5948\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"acdc654d43b9fa06c704a63d81deed3f370d8a4228ad9272c5d28ba2743bc944\"" May 8 00:07:45.385941 containerd[1554]: time="2025-05-08T00:07:45.385917418Z" level=info msg="StartContainer for \"acdc654d43b9fa06c704a63d81deed3f370d8a4228ad9272c5d28ba2743bc944\"" May 8 00:07:45.410699 systemd[1]: Started cri-containerd-acdc654d43b9fa06c704a63d81deed3f370d8a4228ad9272c5d28ba2743bc944.scope - libcontainer container acdc654d43b9fa06c704a63d81deed3f370d8a4228ad9272c5d28ba2743bc944. May 8 00:07:45.445121 containerd[1554]: time="2025-05-08T00:07:45.445054135Z" level=info msg="StartContainer for \"acdc654d43b9fa06c704a63d81deed3f370d8a4228ad9272c5d28ba2743bc944\" returns successfully" May 8 00:07:45.512723 containerd[1554]: time="2025-05-08T00:07:45.512684588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nl7zn,Uid:2cba70d1-0f21-44d6-970e-5d2a01d15dfb,Namespace:kube-system,Attempt:0,}" May 8 00:07:45.577373 containerd[1554]: time="2025-05-08T00:07:45.577119283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:07:45.577373 containerd[1554]: time="2025-05-08T00:07:45.577174076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:07:45.577373 containerd[1554]: time="2025-05-08T00:07:45.577184692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:45.577587 containerd[1554]: time="2025-05-08T00:07:45.577234304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:07:45.589654 systemd[1]: Started cri-containerd-a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092.scope - libcontainer container a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092. 
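The cilium image above is pulled by tag plus digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce…). The simplified sketch below splits a reference of that shape into repository, tag and digest; real image-reference parsing, as containerd performs it, covers a much larger grammar, so this only handles the form that appears in this log:

```go
// Simplified sketch: split "repo:tag@sha256:digest" into its parts.
// Real image-reference parsing (as containerd does it) handles many more
// forms; this only covers the shape that appears in this log.
package main

import (
	"fmt"
	"strings"
)

func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// The tag separator is the last ':' after the final '/', so a registry
	// port such as "registry:5000/img" is not mistaken for a tag.
	slash := strings.LastIndex(ref, "/")
	if colon := strings.LastIndex(ref, ":"); colon > slash {
		ref, tag = ref[:colon], ref[colon+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}
```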
May 8 00:07:45.623424 containerd[1554]: time="2025-05-08T00:07:45.623260200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nl7zn,Uid:2cba70d1-0f21-44d6-970e-5d2a01d15dfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092\"" May 8 00:07:46.353302 kubelet[2823]: I0508 00:07:46.352954 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2qpgf" podStartSLOduration=2.352936809 podStartE2EDuration="2.352936809s" podCreationTimestamp="2025-05-08 00:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:07:46.311008292 +0000 UTC m=+8.306887104" watchObservedRunningTime="2025-05-08 00:07:46.352936809 +0000 UTC m=+8.348815631" May 8 00:07:49.363301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985097001.mount: Deactivated successfully. May 8 00:07:51.996929 containerd[1554]: time="2025-05-08T00:07:51.996887532Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:52.003520 containerd[1554]: time="2025-05-08T00:07:52.003478251Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:07:52.009012 containerd[1554]: time="2025-05-08T00:07:52.008979203Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:52.020039 containerd[1554]: time="2025-05-08T00:07:52.010035719Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.676288552s" May 8 00:07:52.020039 containerd[1554]: time="2025-05-08T00:07:52.010058989Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:07:52.021922 containerd[1554]: time="2025-05-08T00:07:52.021816303Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:07:52.080505 containerd[1554]: time="2025-05-08T00:07:52.080477757Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:07:52.161390 containerd[1554]: time="2025-05-08T00:07:52.161257069Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\"" May 8 00:07:52.162728 containerd[1554]: time="2025-05-08T00:07:52.162236029Z" level=info msg="StartContainer for 
\"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\"" May 8 00:07:52.346647 systemd[1]: Started cri-containerd-69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106.scope - libcontainer container 69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106. May 8 00:07:52.369397 containerd[1554]: time="2025-05-08T00:07:52.369328113Z" level=info msg="StartContainer for \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\" returns successfully" May 8 00:07:52.380852 systemd[1]: cri-containerd-69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106.scope: Deactivated successfully. May 8 00:07:53.156951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106-rootfs.mount: Deactivated successfully. May 8 00:07:53.536043 containerd[1554]: time="2025-05-08T00:07:53.521153097Z" level=info msg="shim disconnected" id=69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106 namespace=k8s.io May 8 00:07:53.536043 containerd[1554]: time="2025-05-08T00:07:53.536042216Z" level=warning msg="cleaning up after shim disconnected" id=69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106 namespace=k8s.io May 8 00:07:53.536363 containerd[1554]: time="2025-05-08T00:07:53.536051649Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:07:54.203949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051647970.mount: Deactivated successfully. May 8 00:07:54.364648 containerd[1554]: time="2025-05-08T00:07:54.364617840Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:07:54.431654 containerd[1554]: time="2025-05-08T00:07:54.431598732Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\"" May 8 00:07:54.432629 containerd[1554]: time="2025-05-08T00:07:54.432079308Z" level=info msg="StartContainer for \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\"" May 8 00:07:54.451630 systemd[1]: Started cri-containerd-da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d.scope - libcontainer container da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d. May 8 00:07:54.474497 containerd[1554]: time="2025-05-08T00:07:54.474345690Z" level=info msg="StartContainer for \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\" returns successfully" May 8 00:07:54.486478 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:07:54.486719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:07:54.486815 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:07:54.491707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:07:54.491811 systemd[1]: cri-containerd-da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d.scope: Deactivated successfully. May 8 00:07:54.682699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:07:54.730757 containerd[1554]: time="2025-05-08T00:07:54.730437167Z" level=info msg="shim disconnected" id=da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d namespace=k8s.io May 8 00:07:54.731379 containerd[1554]: time="2025-05-08T00:07:54.731066729Z" level=warning msg="cleaning up after shim disconnected" id=da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d namespace=k8s.io May 8 00:07:54.731379 containerd[1554]: time="2025-05-08T00:07:54.731086679Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:07:54.751959 containerd[1554]: time="2025-05-08T00:07:54.751923218Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:07:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:07:55.017738 containerd[1554]: time="2025-05-08T00:07:55.017710538Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:55.018406 containerd[1554]: time="2025-05-08T00:07:55.018374443Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:07:55.018594 containerd[1554]: time="2025-05-08T00:07:55.018578376Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:07:55.019869 containerd[1554]: time="2025-05-08T00:07:55.019851772Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.998009325s" May 8 00:07:55.019899 containerd[1554]: time="2025-05-08T00:07:55.019871165Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:07:55.024142 containerd[1554]: time="2025-05-08T00:07:55.024122331Z" level=info msg="CreateContainer within sandbox \"a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:07:55.029091 containerd[1554]: time="2025-05-08T00:07:55.029062532Z" level=info msg="CreateContainer within sandbox \"a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\"" May 8 00:07:55.030565 containerd[1554]: time="2025-05-08T00:07:55.030055969Z" level=info msg="StartContainer for \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\"" May 8 00:07:55.054642 systemd[1]: Started cri-containerd-ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24.scope - libcontainer container ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24. 
May 8 00:07:55.105738 containerd[1554]: time="2025-05-08T00:07:55.105695417Z" level=info msg="StartContainer for \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\" returns successfully" May 8 00:07:55.202259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d-rootfs.mount: Deactivated successfully. May 8 00:07:55.379167 containerd[1554]: time="2025-05-08T00:07:55.379094169Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:07:55.406401 containerd[1554]: time="2025-05-08T00:07:55.406374443Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\"" May 8 00:07:55.408673 containerd[1554]: time="2025-05-08T00:07:55.408652095Z" level=info msg="StartContainer for \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\"" May 8 00:07:55.440653 systemd[1]: Started cri-containerd-f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e.scope - libcontainer container f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e. May 8 00:07:55.497146 containerd[1554]: time="2025-05-08T00:07:55.496690728Z" level=info msg="StartContainer for \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\" returns successfully" May 8 00:07:55.513644 kubelet[2823]: I0508 00:07:55.513533 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nl7zn" podStartSLOduration=1.116696227 podStartE2EDuration="10.513511943s" podCreationTimestamp="2025-05-08 00:07:45 +0000 UTC" firstStartedPulling="2025-05-08 00:07:45.624406953 +0000 UTC m=+7.620285760" lastFinishedPulling="2025-05-08 00:07:55.021222664 +0000 UTC m=+17.017101476" observedRunningTime="2025-05-08 00:07:55.415286314 +0000 UTC m=+17.411165130" watchObservedRunningTime="2025-05-08 00:07:55.513511943 +0000 UTC m=+17.509390759" May 8 00:07:55.514489 systemd[1]: cri-containerd-f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e.scope: Deactivated successfully. May 8 00:07:55.514817 systemd[1]: cri-containerd-f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e.scope: Consumed 14ms CPU time, 3.1M memory peak, 1M read from disk. 
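The pod_startup_latency_tracker entry above quotes firstStartedPulling and lastFinishedPulling timestamps in Go's default time format, with a trailing monotonic "m=+…" reading. The sketch below reparses the two timestamps quoted for cilium-operator-6c4d7847fc-nl7zn and recomputes the image-pull window between them; the layout string is the standard one for this format, and the monotonic suffix is dropped before parsing:

```go
// Sketch: recompute the image-pull window reported by the kubelet's pod
// startup latency tracker for cilium-operator-6c4d7847fc-nl7zn, using the
// firstStartedPulling / lastFinishedPulling timestamps quoted in the log
// (the trailing monotonic "m=+..." part is dropped before parsing).
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	first, err := time.Parse(layout, "2025-05-08 00:07:45.624406953 +0000 UTC")
	if err != nil {
		panic(err)
	}
	last, err := time.Parse(layout, "2025-05-08 00:07:55.021222664 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("image pull window:", last.Sub(first)) // 9.396815711s
}
```

The 9.396815711s window is wider than the 2.998009325s pull reported for the operator image itself, most likely because image pulls are serialized by default: the operator pull queues behind the cilium agent pull, which only finishes at 00:07:52. That reading is a hedged interpretation of the log, not something the log states directly.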
May 8 00:07:55.536703 containerd[1554]: time="2025-05-08T00:07:55.536567538Z" level=info msg="shim disconnected" id=f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e namespace=k8s.io May 8 00:07:55.536703 containerd[1554]: time="2025-05-08T00:07:55.536608231Z" level=warning msg="cleaning up after shim disconnected" id=f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e namespace=k8s.io May 8 00:07:55.536703 containerd[1554]: time="2025-05-08T00:07:55.536613887Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:07:55.553304 containerd[1554]: time="2025-05-08T00:07:55.553184670Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:07:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:07:56.202380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e-rootfs.mount: Deactivated successfully. May 8 00:07:56.381905 containerd[1554]: time="2025-05-08T00:07:56.381770545Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:07:56.409056 containerd[1554]: time="2025-05-08T00:07:56.409030899Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\"" May 8 00:07:56.409511 containerd[1554]: time="2025-05-08T00:07:56.409362435Z" level=info msg="StartContainer for \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\"" May 8 00:07:56.441703 systemd[1]: Started cri-containerd-7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622.scope - libcontainer container 7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622. May 8 00:07:56.457393 systemd[1]: cri-containerd-7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622.scope: Deactivated successfully. May 8 00:07:56.458695 containerd[1554]: time="2025-05-08T00:07:56.458672483Z" level=info msg="StartContainer for \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\" returns successfully" May 8 00:07:56.475971 containerd[1554]: time="2025-05-08T00:07:56.475932931Z" level=info msg="shim disconnected" id=7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622 namespace=k8s.io May 8 00:07:56.475971 containerd[1554]: time="2025-05-08T00:07:56.475969673Z" level=warning msg="cleaning up after shim disconnected" id=7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622 namespace=k8s.io May 8 00:07:56.475971 containerd[1554]: time="2025-05-08T00:07:56.475977295Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:07:57.202110 systemd[1]: run-containerd-runc-k8s.io-7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622-runc.BlgOhM.mount: Deactivated successfully. May 8 00:07:57.202190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622-rootfs.mount: Deactivated successfully. 
May 8 00:07:57.385355 containerd[1554]: time="2025-05-08T00:07:57.385275665Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:07:57.450262 containerd[1554]: time="2025-05-08T00:07:57.450181060Z" level=info msg="CreateContainer within sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\"" May 8 00:07:57.450673 containerd[1554]: time="2025-05-08T00:07:57.450510050Z" level=info msg="StartContainer for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\"" May 8 00:07:57.469644 systemd[1]: Started cri-containerd-2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f.scope - libcontainer container 2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f. May 8 00:07:57.495819 containerd[1554]: time="2025-05-08T00:07:57.495370552Z" level=info msg="StartContainer for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" returns successfully" May 8 00:07:57.687907 kubelet[2823]: I0508 00:07:57.687885 2823 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:07:57.774129 systemd[1]: Created slice kubepods-burstable-podcdbb789e_daef_40bc_a108_22c4959bf5d9.slice - libcontainer container kubepods-burstable-podcdbb789e_daef_40bc_a108_22c4959bf5d9.slice. May 8 00:07:57.780532 systemd[1]: Created slice kubepods-burstable-pod37ec69b7_0be8_4682_9a49_5c7a05ac5fb1.slice - libcontainer container kubepods-burstable-pod37ec69b7_0be8_4682_9a49_5c7a05ac5fb1.slice. May 8 00:07:57.921694 kubelet[2823]: I0508 00:07:57.921579 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j74zt\" (UniqueName: \"kubernetes.io/projected/cdbb789e-daef-40bc-a108-22c4959bf5d9-kube-api-access-j74zt\") pod \"coredns-668d6bf9bc-hmjdc\" (UID: \"cdbb789e-daef-40bc-a108-22c4959bf5d9\") " pod="kube-system/coredns-668d6bf9bc-hmjdc" May 8 00:07:57.921694 kubelet[2823]: I0508 00:07:57.921626 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdbb789e-daef-40bc-a108-22c4959bf5d9-config-volume\") pod \"coredns-668d6bf9bc-hmjdc\" (UID: \"cdbb789e-daef-40bc-a108-22c4959bf5d9\") " pod="kube-system/coredns-668d6bf9bc-hmjdc" May 8 00:07:57.921694 kubelet[2823]: I0508 00:07:57.921637 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrpv\" (UniqueName: \"kubernetes.io/projected/37ec69b7-0be8-4682-9a49-5c7a05ac5fb1-kube-api-access-7rrpv\") pod \"coredns-668d6bf9bc-qddsc\" (UID: \"37ec69b7-0be8-4682-9a49-5c7a05ac5fb1\") " pod="kube-system/coredns-668d6bf9bc-qddsc" May 8 00:07:57.921694 kubelet[2823]: I0508 00:07:57.921647 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37ec69b7-0be8-4682-9a49-5c7a05ac5fb1-config-volume\") pod \"coredns-668d6bf9bc-qddsc\" (UID: \"37ec69b7-0be8-4682-9a49-5c7a05ac5fb1\") " pod="kube-system/coredns-668d6bf9bc-qddsc" May 8 00:07:58.092007 containerd[1554]: time="2025-05-08T00:07:58.091834135Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-qddsc,Uid:37ec69b7-0be8-4682-9a49-5c7a05ac5fb1,Namespace:kube-system,Attempt:0,}" May 8 00:07:58.377708 containerd[1554]: time="2025-05-08T00:07:58.377627342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hmjdc,Uid:cdbb789e-daef-40bc-a108-22c4959bf5d9,Namespace:kube-system,Attempt:0,}" May 8 00:07:58.442988 kubelet[2823]: I0508 00:07:58.441786 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dk8bs" podStartSLOduration=7.753636945 podStartE2EDuration="14.441774242s" podCreationTimestamp="2025-05-08 00:07:44 +0000 UTC" firstStartedPulling="2025-05-08 00:07:45.333485013 +0000 UTC m=+7.329363818" lastFinishedPulling="2025-05-08 00:07:52.021622302 +0000 UTC m=+14.017501115" observedRunningTime="2025-05-08 00:07:58.438396344 +0000 UTC m=+20.434275160" watchObservedRunningTime="2025-05-08 00:07:58.441774242 +0000 UTC m=+20.437653058" May 8 00:08:00.062848 systemd-networkd[1461]: cilium_host: Link UP May 8 00:08:00.063754 systemd-networkd[1461]: cilium_net: Link UP May 8 00:08:00.063930 systemd-networkd[1461]: cilium_net: Gained carrier May 8 00:08:00.064924 systemd-networkd[1461]: cilium_host: Gained carrier May 8 00:08:00.105607 systemd-networkd[1461]: cilium_net: Gained IPv6LL May 8 00:08:00.331188 systemd-networkd[1461]: cilium_vxlan: Link UP May 8 00:08:00.331193 systemd-networkd[1461]: cilium_vxlan: Gained carrier May 8 00:08:00.336715 systemd-networkd[1461]: cilium_host: Gained IPv6LL May 8 00:08:00.757630 kernel: NET: Registered PF_ALG protocol family May 8 00:08:01.385509 systemd-networkd[1461]: lxc_health: Link UP May 8 00:08:01.387901 systemd-networkd[1461]: lxc_health: Gained carrier May 8 00:08:01.792802 systemd-networkd[1461]: lxc3eb16c13a472: Link UP May 8 00:08:01.798649 kernel: eth0: renamed from tmp83c3a May 8 00:08:01.805774 systemd-networkd[1461]: lxc3eb16c13a472: Gained carrier May 8 00:08:01.976752 systemd-networkd[1461]: lxc18519133f181: Link UP May 8 00:08:01.979589 kernel: eth0: renamed from tmpc0f12 May 8 00:08:01.984245 systemd-networkd[1461]: lxc18519133f181: Gained carrier May 8 00:08:02.144635 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL May 8 00:08:03.296667 systemd-networkd[1461]: lxc_health: Gained IPv6LL May 8 00:08:03.362640 systemd-networkd[1461]: lxc18519133f181: Gained IPv6LL May 8 00:08:03.744694 systemd-networkd[1461]: lxc3eb16c13a472: Gained IPv6LL May 8 00:08:04.642647 containerd[1554]: time="2025-05-08T00:08:04.642412561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:04.644132 containerd[1554]: time="2025-05-08T00:08:04.642960570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:04.644132 containerd[1554]: time="2025-05-08T00:08:04.643002028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:04.645727 containerd[1554]: time="2025-05-08T00:08:04.644640532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:04.656619 containerd[1554]: time="2025-05-08T00:08:04.655440834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:04.656619 containerd[1554]: time="2025-05-08T00:08:04.655528872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:04.656619 containerd[1554]: time="2025-05-08T00:08:04.655601549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:04.656619 containerd[1554]: time="2025-05-08T00:08:04.656052356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:04.681832 systemd[1]: run-containerd-runc-k8s.io-c0f1216752381724acb0b1c6ee85331c89086817fd503ad60e3585c5fb858b85-runc.BPp7K0.mount: Deactivated successfully. May 8 00:08:04.691427 systemd[1]: Started cri-containerd-83c3a674c330c31662798f57b92329fa192ddcef85cbe9574765daa896868aa1.scope - libcontainer container 83c3a674c330c31662798f57b92329fa192ddcef85cbe9574765daa896868aa1. May 8 00:08:04.693889 systemd[1]: Started cri-containerd-c0f1216752381724acb0b1c6ee85331c89086817fd503ad60e3585c5fb858b85.scope - libcontainer container c0f1216752381724acb0b1c6ee85331c89086817fd503ad60e3585c5fb858b85. May 8 00:08:04.712835 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:08:04.718877 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:08:04.767389 containerd[1554]: time="2025-05-08T00:08:04.767344913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hmjdc,Uid:cdbb789e-daef-40bc-a108-22c4959bf5d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0f1216752381724acb0b1c6ee85331c89086817fd503ad60e3585c5fb858b85\"" May 8 00:08:04.770828 containerd[1554]: time="2025-05-08T00:08:04.770554517Z" level=info msg="CreateContainer within sandbox \"c0f1216752381724acb0b1c6ee85331c89086817fd503ad60e3585c5fb858b85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:08:04.774823 containerd[1554]: time="2025-05-08T00:08:04.774791882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qddsc,Uid:37ec69b7-0be8-4682-9a49-5c7a05ac5fb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c3a674c330c31662798f57b92329fa192ddcef85cbe9574765daa896868aa1\"" May 8 00:08:04.777606 containerd[1554]: time="2025-05-08T00:08:04.777505628Z" level=info msg="CreateContainer within sandbox \"83c3a674c330c31662798f57b92329fa192ddcef85cbe9574765daa896868aa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:08:04.925703 containerd[1554]: time="2025-05-08T00:08:04.925605067Z" level=info msg="CreateContainer within sandbox \"c0f1216752381724acb0b1c6ee85331c89086817fd503ad60e3585c5fb858b85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b2523d92d595b3ab21230d499178633b73184b96bf00f7c1cc7c31c4509d87d\"" May 8 00:08:04.926455 containerd[1554]: time="2025-05-08T00:08:04.926310988Z" level=info msg="StartContainer for \"3b2523d92d595b3ab21230d499178633b73184b96bf00f7c1cc7c31c4509d87d\"" May 8 00:08:04.931243 containerd[1554]: time="2025-05-08T00:08:04.931073113Z" level=info msg="CreateContainer within sandbox \"83c3a674c330c31662798f57b92329fa192ddcef85cbe9574765daa896868aa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"a5d745836dd4edeb40bbaa3465564da18925270bab12955d7acd4e4cf4749be3\"" May 8 00:08:04.932599 containerd[1554]: time="2025-05-08T00:08:04.931905139Z" level=info msg="StartContainer for \"a5d745836dd4edeb40bbaa3465564da18925270bab12955d7acd4e4cf4749be3\"" May 8 00:08:04.950673 systemd[1]: Started cri-containerd-3b2523d92d595b3ab21230d499178633b73184b96bf00f7c1cc7c31c4509d87d.scope - libcontainer container 3b2523d92d595b3ab21230d499178633b73184b96bf00f7c1cc7c31c4509d87d. May 8 00:08:04.962684 systemd[1]: Started cri-containerd-a5d745836dd4edeb40bbaa3465564da18925270bab12955d7acd4e4cf4749be3.scope - libcontainer container a5d745836dd4edeb40bbaa3465564da18925270bab12955d7acd4e4cf4749be3. May 8 00:08:05.093918 containerd[1554]: time="2025-05-08T00:08:05.093796789Z" level=info msg="StartContainer for \"3b2523d92d595b3ab21230d499178633b73184b96bf00f7c1cc7c31c4509d87d\" returns successfully" May 8 00:08:05.093918 containerd[1554]: time="2025-05-08T00:08:05.093796798Z" level=info msg="StartContainer for \"a5d745836dd4edeb40bbaa3465564da18925270bab12955d7acd4e4cf4749be3\" returns successfully" May 8 00:08:05.418692 kubelet[2823]: I0508 00:08:05.418206 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hmjdc" podStartSLOduration=20.418192547 podStartE2EDuration="20.418192547s" podCreationTimestamp="2025-05-08 00:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:05.417067409 +0000 UTC m=+27.412946224" watchObservedRunningTime="2025-05-08 00:08:05.418192547 +0000 UTC m=+27.414071364" May 8 00:08:05.652561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286374440.mount: Deactivated successfully. May 8 00:08:06.425561 kubelet[2823]: I0508 00:08:06.424787 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qddsc" podStartSLOduration=21.42477126 podStartE2EDuration="21.42477126s" podCreationTimestamp="2025-05-08 00:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:05.432916572 +0000 UTC m=+27.428795397" watchObservedRunningTime="2025-05-08 00:08:06.42477126 +0000 UTC m=+28.420650073" May 8 00:08:21.087837 systemd[1]: Started sshd@8-139.178.70.106:22-149.107.122.12:46304.service - OpenSSH per-connection server daemon (149.107.122.12:46304). 
May 8 00:08:22.528464 sshd[4192]: Invalid user pgsql from 149.107.122.12 port 46304 May 8 00:08:22.756908 sshd-session[4197]: pam_faillock(sshd:auth): User unknown May 8 00:08:22.762594 sshd[4192]: Postponed keyboard-interactive for invalid user pgsql from 149.107.122.12 port 46304 ssh2 [preauth] May 8 00:08:23.003439 sshd-session[4197]: pam_unix(sshd:auth): check pass; user unknown May 8 00:08:23.003455 sshd-session[4197]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=149.107.122.12 May 8 00:08:23.003799 sshd-session[4197]: pam_faillock(sshd:auth): User unknown May 8 00:08:24.725200 sshd[4192]: PAM: Permission denied for illegal user pgsql from 149.107.122.12 May 8 00:08:24.726207 sshd[4192]: Failed keyboard-interactive/pam for invalid user pgsql from 149.107.122.12 port 46304 ssh2 May 8 00:08:25.008999 sshd[4192]: Connection closed by invalid user pgsql 149.107.122.12 port 46304 [preauth] May 8 00:08:25.010178 systemd[1]: sshd@8-139.178.70.106:22-149.107.122.12:46304.service: Deactivated successfully. May 8 00:08:26.398616 systemd[1]: Started sshd@9-139.178.70.106:22-113.11.34.221:39058.service - OpenSSH per-connection server daemon (113.11.34.221:39058). May 8 00:08:27.595288 systemd[1]: Started sshd@10-139.178.70.106:22-139.19.117.130:41594.service - OpenSSH per-connection server daemon (139.19.117.130:41594). May 8 00:08:28.286654 sshd[4204]: Invalid user admin from 139.19.117.130 port 41594 May 8 00:08:28.666379 sshd[4201]: Invalid user httpd from 113.11.34.221 port 39058 May 8 00:08:29.113725 sshd-session[4206]: pam_faillock(sshd:auth): User unknown May 8 00:08:29.120638 sshd[4201]: Postponed keyboard-interactive for invalid user httpd from 113.11.34.221 port 39058 ssh2 [preauth] May 8 00:08:29.718053 sshd-session[4206]: pam_unix(sshd:auth): check pass; user unknown May 8 00:08:29.718072 sshd-session[4206]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=113.11.34.221 May 8 00:08:29.718355 sshd-session[4206]: pam_faillock(sshd:auth): User unknown May 8 00:08:31.735982 sshd[4201]: PAM: Permission denied for illegal user httpd from 113.11.34.221 May 8 00:08:31.736218 sshd[4201]: Failed keyboard-interactive/pam for invalid user httpd from 113.11.34.221 port 39058 ssh2 May 8 00:08:32.305463 sshd[4201]: Connection closed by invalid user httpd 113.11.34.221 port 39058 [preauth] May 8 00:08:32.306242 systemd[1]: sshd@9-139.178.70.106:22-113.11.34.221:39058.service: Deactivated successfully. May 8 00:08:37.585626 sshd[4204]: Connection closed by invalid user admin 139.19.117.130 port 41594 [preauth] May 8 00:08:37.586526 systemd[1]: sshd@10-139.178.70.106:22-139.19.117.130:41594.service: Deactivated successfully. May 8 00:08:48.089591 systemd[1]: Started sshd@11-139.178.70.106:22-139.178.89.65:45360.service - OpenSSH per-connection server daemon (139.178.89.65:45360). May 8 00:08:48.149850 sshd[4220]: Accepted publickey for core from 139.178.89.65 port 45360 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:08:48.159772 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:48.171147 systemd-logind[1535]: New session 10 of user core. May 8 00:08:48.178741 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 8 00:08:48.998865 sshd[4222]: Connection closed by 139.178.89.65 port 45360 May 8 00:08:48.999222 sshd-session[4220]: pam_unix(sshd:session): session closed for user core May 8 00:08:49.018220 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. May 8 00:08:49.018326 systemd[1]: sshd@11-139.178.70.106:22-139.178.89.65:45360.service: Deactivated successfully. May 8 00:08:49.019540 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:08:49.020189 systemd-logind[1535]: Removed session 10. May 8 00:08:54.013736 systemd[1]: Started sshd@12-139.178.70.106:22-139.178.89.65:45366.service - OpenSSH per-connection server daemon (139.178.89.65:45366). May 8 00:08:54.417344 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 45366 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:08:54.418432 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:54.429637 systemd-logind[1535]: New session 11 of user core. May 8 00:08:54.432654 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:08:54.714385 sshd[4236]: Connection closed by 139.178.89.65 port 45366 May 8 00:08:54.714753 sshd-session[4234]: pam_unix(sshd:session): session closed for user core May 8 00:08:54.737361 systemd[1]: sshd@12-139.178.70.106:22-139.178.89.65:45366.service: Deactivated successfully. May 8 00:08:54.738950 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. May 8 00:08:54.739063 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:08:54.739906 systemd-logind[1535]: Removed session 11. May 8 00:08:59.735775 systemd[1]: Started sshd@13-139.178.70.106:22-139.178.89.65:57118.service - OpenSSH per-connection server daemon (139.178.89.65:57118). May 8 00:08:59.761510 sshd[4248]: Accepted publickey for core from 139.178.89.65 port 57118 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:08:59.762358 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:59.765309 systemd-logind[1535]: New session 12 of user core. May 8 00:08:59.772781 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:08:59.862474 sshd[4250]: Connection closed by 139.178.89.65 port 57118 May 8 00:08:59.862845 sshd-session[4248]: pam_unix(sshd:session): session closed for user core May 8 00:08:59.865007 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. May 8 00:08:59.865130 systemd[1]: sshd@13-139.178.70.106:22-139.178.89.65:57118.service: Deactivated successfully. May 8 00:08:59.866181 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:08:59.866791 systemd-logind[1535]: Removed session 12. May 8 00:09:04.876058 systemd[1]: Started sshd@14-139.178.70.106:22-139.178.89.65:57124.service - OpenSSH per-connection server daemon (139.178.89.65:57124). May 8 00:09:04.907081 sshd[4265]: Accepted publickey for core from 139.178.89.65 port 57124 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:04.908001 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:04.912380 systemd-logind[1535]: New session 13 of user core. May 8 00:09:04.917733 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 8 00:09:05.008172 sshd[4267]: Connection closed by 139.178.89.65 port 57124 May 8 00:09:05.009192 sshd-session[4265]: pam_unix(sshd:session): session closed for user core May 8 00:09:05.016936 systemd[1]: sshd@14-139.178.70.106:22-139.178.89.65:57124.service: Deactivated successfully. May 8 00:09:05.018234 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:09:05.018777 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. May 8 00:09:05.022859 systemd[1]: Started sshd@15-139.178.70.106:22-139.178.89.65:57136.service - OpenSSH per-connection server daemon (139.178.89.65:57136). May 8 00:09:05.024185 systemd-logind[1535]: Removed session 13. May 8 00:09:05.052362 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 57136 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:05.053343 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:05.057110 systemd-logind[1535]: New session 14 of user core. May 8 00:09:05.064687 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:09:05.198070 sshd[4282]: Connection closed by 139.178.89.65 port 57136 May 8 00:09:05.197843 sshd-session[4279]: pam_unix(sshd:session): session closed for user core May 8 00:09:05.207223 systemd[1]: sshd@15-139.178.70.106:22-139.178.89.65:57136.service: Deactivated successfully. May 8 00:09:05.210565 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:09:05.211942 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit. May 8 00:09:05.219983 systemd[1]: Started sshd@16-139.178.70.106:22-139.178.89.65:57146.service - OpenSSH per-connection server daemon (139.178.89.65:57146). May 8 00:09:05.222785 systemd-logind[1535]: Removed session 14. May 8 00:09:05.248487 sshd[4291]: Accepted publickey for core from 139.178.89.65 port 57146 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:05.249627 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:05.252735 systemd-logind[1535]: New session 15 of user core. May 8 00:09:05.261662 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:09:05.351564 sshd[4294]: Connection closed by 139.178.89.65 port 57146 May 8 00:09:05.351978 sshd-session[4291]: pam_unix(sshd:session): session closed for user core May 8 00:09:05.354245 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit. May 8 00:09:05.354385 systemd[1]: sshd@16-139.178.70.106:22-139.178.89.65:57146.service: Deactivated successfully. May 8 00:09:05.355579 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:09:05.356227 systemd-logind[1535]: Removed session 15. May 8 00:09:10.363719 systemd[1]: Started sshd@17-139.178.70.106:22-139.178.89.65:57154.service - OpenSSH per-connection server daemon (139.178.89.65:57154). May 8 00:09:10.394835 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 57154 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:10.395723 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:10.398449 systemd-logind[1535]: New session 16 of user core. May 8 00:09:10.412743 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 8 00:09:10.499892 sshd[4307]: Connection closed by 139.178.89.65 port 57154 May 8 00:09:10.500240 sshd-session[4305]: pam_unix(sshd:session): session closed for user core May 8 00:09:10.502851 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit. May 8 00:09:10.503028 systemd[1]: sshd@17-139.178.70.106:22-139.178.89.65:57154.service: Deactivated successfully. May 8 00:09:10.504528 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:09:10.505597 systemd-logind[1535]: Removed session 16. May 8 00:09:15.512076 systemd[1]: Started sshd@18-139.178.70.106:22-139.178.89.65:57162.service - OpenSSH per-connection server daemon (139.178.89.65:57162). May 8 00:09:15.540991 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 57162 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:15.541756 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:15.545653 systemd-logind[1535]: New session 17 of user core. May 8 00:09:15.551726 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:09:15.717188 sshd[4322]: Connection closed by 139.178.89.65 port 57162 May 8 00:09:15.717625 sshd-session[4320]: pam_unix(sshd:session): session closed for user core May 8 00:09:15.724428 systemd[1]: sshd@18-139.178.70.106:22-139.178.89.65:57162.service: Deactivated successfully. May 8 00:09:15.725467 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:09:15.726414 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit. May 8 00:09:15.729904 systemd[1]: Started sshd@19-139.178.70.106:22-139.178.89.65:57170.service - OpenSSH per-connection server daemon (139.178.89.65:57170). May 8 00:09:15.731940 systemd-logind[1535]: Removed session 17. May 8 00:09:15.759798 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 57170 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:15.760832 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:15.764628 systemd-logind[1535]: New session 18 of user core. May 8 00:09:15.766672 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:09:16.965503 sshd[4336]: Connection closed by 139.178.89.65 port 57170 May 8 00:09:16.966221 sshd-session[4333]: pam_unix(sshd:session): session closed for user core May 8 00:09:16.976443 systemd[1]: Started sshd@20-139.178.70.106:22-139.178.89.65:41068.service - OpenSSH per-connection server daemon (139.178.89.65:41068). May 8 00:09:16.976794 systemd[1]: sshd@19-139.178.70.106:22-139.178.89.65:57170.service: Deactivated successfully. May 8 00:09:16.977954 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:09:16.979369 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit. May 8 00:09:16.981177 systemd-logind[1535]: Removed session 18. May 8 00:09:17.125724 sshd[4343]: Accepted publickey for core from 139.178.89.65 port 41068 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:17.127045 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:17.130165 systemd-logind[1535]: New session 19 of user core. May 8 00:09:17.143752 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 8 00:09:18.322364 sshd[4350]: Connection closed by 139.178.89.65 port 41068 May 8 00:09:18.322134 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 8 00:09:18.338625 systemd[1]: Started sshd@21-139.178.70.106:22-139.178.89.65:41078.service - OpenSSH per-connection server daemon (139.178.89.65:41078). May 8 00:09:18.341032 systemd[1]: sshd@20-139.178.70.106:22-139.178.89.65:41068.service: Deactivated successfully. May 8 00:09:18.342412 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:09:18.346240 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit. May 8 00:09:18.347333 systemd-logind[1535]: Removed session 19. May 8 00:09:18.498190 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 41078 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:18.499690 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:18.507304 systemd-logind[1535]: New session 20 of user core. May 8 00:09:18.511726 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:09:18.894525 sshd[4369]: Connection closed by 139.178.89.65 port 41078 May 8 00:09:18.895979 sshd-session[4362]: pam_unix(sshd:session): session closed for user core May 8 00:09:18.908149 systemd[1]: Started sshd@22-139.178.70.106:22-139.178.89.65:41084.service - OpenSSH per-connection server daemon (139.178.89.65:41084). May 8 00:09:18.909938 systemd[1]: sshd@21-139.178.70.106:22-139.178.89.65:41078.service: Deactivated successfully. May 8 00:09:18.913528 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:09:18.915299 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit. May 8 00:09:18.916224 systemd-logind[1535]: Removed session 20. May 8 00:09:18.946600 sshd[4375]: Accepted publickey for core from 139.178.89.65 port 41084 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:18.947612 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:18.951821 systemd-logind[1535]: New session 21 of user core. May 8 00:09:18.954712 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:09:19.120000 sshd[4380]: Connection closed by 139.178.89.65 port 41084 May 8 00:09:19.120215 sshd-session[4375]: pam_unix(sshd:session): session closed for user core May 8 00:09:19.122285 systemd[1]: sshd@22-139.178.70.106:22-139.178.89.65:41084.service: Deactivated successfully. May 8 00:09:19.123537 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:09:19.124436 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit. May 8 00:09:19.125336 systemd-logind[1535]: Removed session 21. May 8 00:09:24.130667 systemd[1]: Started sshd@23-139.178.70.106:22-139.178.89.65:41098.service - OpenSSH per-connection server daemon (139.178.89.65:41098). May 8 00:09:24.159424 sshd[4392]: Accepted publickey for core from 139.178.89.65 port 41098 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:24.160238 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:24.162898 systemd-logind[1535]: New session 22 of user core. May 8 00:09:24.168659 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 8 00:09:24.259472 sshd[4394]: Connection closed by 139.178.89.65 port 41098 May 8 00:09:24.259869 sshd-session[4392]: pam_unix(sshd:session): session closed for user core May 8 00:09:24.262188 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit. May 8 00:09:24.262645 systemd[1]: sshd@23-139.178.70.106:22-139.178.89.65:41098.service: Deactivated successfully. May 8 00:09:24.263753 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:09:24.264430 systemd-logind[1535]: Removed session 22. May 8 00:09:29.269958 systemd[1]: Started sshd@24-139.178.70.106:22-139.178.89.65:48408.service - OpenSSH per-connection server daemon (139.178.89.65:48408). May 8 00:09:29.300128 sshd[4407]: Accepted publickey for core from 139.178.89.65 port 48408 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:29.300943 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:29.304079 systemd-logind[1535]: New session 23 of user core. May 8 00:09:29.310684 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:09:29.407170 sshd[4409]: Connection closed by 139.178.89.65 port 48408 May 8 00:09:29.407740 sshd-session[4407]: pam_unix(sshd:session): session closed for user core May 8 00:09:29.410081 systemd-logind[1535]: Session 23 logged out. Waiting for processes to exit. May 8 00:09:29.410201 systemd[1]: sshd@24-139.178.70.106:22-139.178.89.65:48408.service: Deactivated successfully. May 8 00:09:29.411369 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:09:29.411983 systemd-logind[1535]: Removed session 23. May 8 00:09:34.421583 systemd[1]: Started sshd@25-139.178.70.106:22-139.178.89.65:48424.service - OpenSSH per-connection server daemon (139.178.89.65:48424). May 8 00:09:34.466295 sshd[4421]: Accepted publickey for core from 139.178.89.65 port 48424 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:34.467160 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:34.470514 systemd-logind[1535]: New session 24 of user core. May 8 00:09:34.478691 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:09:34.592387 sshd[4423]: Connection closed by 139.178.89.65 port 48424 May 8 00:09:34.593134 sshd-session[4421]: pam_unix(sshd:session): session closed for user core May 8 00:09:34.595582 systemd[1]: sshd@25-139.178.70.106:22-139.178.89.65:48424.service: Deactivated successfully. May 8 00:09:34.597134 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:09:34.597817 systemd-logind[1535]: Session 24 logged out. Waiting for processes to exit. May 8 00:09:34.598475 systemd-logind[1535]: Removed session 24. May 8 00:09:39.603513 systemd[1]: Started sshd@26-139.178.70.106:22-139.178.89.65:38112.service - OpenSSH per-connection server daemon (139.178.89.65:38112). May 8 00:09:39.635971 sshd[4436]: Accepted publickey for core from 139.178.89.65 port 38112 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:39.636932 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:39.639771 systemd-logind[1535]: New session 25 of user core. May 8 00:09:39.647730 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 8 00:09:39.751873 sshd[4438]: Connection closed by 139.178.89.65 port 38112 May 8 00:09:39.752360 sshd-session[4436]: pam_unix(sshd:session): session closed for user core May 8 00:09:39.759348 systemd[1]: sshd@26-139.178.70.106:22-139.178.89.65:38112.service: Deactivated successfully. May 8 00:09:39.761013 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:09:39.762063 systemd-logind[1535]: Session 25 logged out. Waiting for processes to exit. May 8 00:09:39.768869 systemd[1]: Started sshd@27-139.178.70.106:22-139.178.89.65:38126.service - OpenSSH per-connection server daemon (139.178.89.65:38126). May 8 00:09:39.770388 systemd-logind[1535]: Removed session 25. May 8 00:09:39.798025 sshd[4449]: Accepted publickey for core from 139.178.89.65 port 38126 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:39.799161 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:39.804419 systemd-logind[1535]: New session 26 of user core. May 8 00:09:39.810763 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:09:41.402061 containerd[1554]: time="2025-05-08T00:09:41.401864410Z" level=info msg="StopContainer for \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\" with timeout 30 (s)" May 8 00:09:41.422557 containerd[1554]: time="2025-05-08T00:09:41.422526838Z" level=info msg="Stop container \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\" with signal terminated" May 8 00:09:41.429793 systemd[1]: cri-containerd-ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24.scope: Deactivated successfully. May 8 00:09:41.441896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24-rootfs.mount: Deactivated successfully. May 8 00:09:41.451326 containerd[1554]: time="2025-05-08T00:09:41.451223066Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:09:41.471839 containerd[1554]: time="2025-05-08T00:09:41.471815559Z" level=info msg="StopContainer for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" with timeout 2 (s)" May 8 00:09:41.472078 containerd[1554]: time="2025-05-08T00:09:41.472039585Z" level=info msg="Stop container \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" with signal terminated" May 8 00:09:41.488895 systemd-networkd[1461]: lxc_health: Link DOWN May 8 00:09:41.488900 systemd-networkd[1461]: lxc_health: Lost carrier May 8 00:09:41.498606 containerd[1554]: time="2025-05-08T00:09:41.498556014Z" level=info msg="shim disconnected" id=ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24 namespace=k8s.io May 8 00:09:41.498606 containerd[1554]: time="2025-05-08T00:09:41.498598146Z" level=warning msg="cleaning up after shim disconnected" id=ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24 namespace=k8s.io May 8 00:09:41.498606 containerd[1554]: time="2025-05-08T00:09:41.498605249Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:41.502879 systemd[1]: cri-containerd-2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f.scope: Deactivated successfully. 
May 8 00:09:41.503070 systemd[1]: cri-containerd-2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f.scope: Consumed 4.834s CPU time, 196.4M memory peak, 72.4M read from disk, 13.3M written to disk. May 8 00:09:41.512087 containerd[1554]: time="2025-05-08T00:09:41.512004444Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:09:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:09:41.516985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f-rootfs.mount: Deactivated successfully. May 8 00:09:41.530631 containerd[1554]: time="2025-05-08T00:09:41.530583130Z" level=info msg="StopContainer for \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\" returns successfully" May 8 00:09:41.630810 containerd[1554]: time="2025-05-08T00:09:41.630786901Z" level=info msg="StopPodSandbox for \"a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092\"" May 8 00:09:41.635338 containerd[1554]: time="2025-05-08T00:09:41.630827510Z" level=info msg="Container to stop \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:09:41.636737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092-shm.mount: Deactivated successfully. May 8 00:09:41.643523 systemd[1]: cri-containerd-a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092.scope: Deactivated successfully. May 8 00:09:41.657857 containerd[1554]: time="2025-05-08T00:09:41.657616282Z" level=info msg="shim disconnected" id=2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f namespace=k8s.io May 8 00:09:41.657857 containerd[1554]: time="2025-05-08T00:09:41.657769946Z" level=warning msg="cleaning up after shim disconnected" id=2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f namespace=k8s.io May 8 00:09:41.657857 containerd[1554]: time="2025-05-08T00:09:41.657779944Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:41.659126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092-rootfs.mount: Deactivated successfully. 
May 8 00:09:41.662697 containerd[1554]: time="2025-05-08T00:09:41.662652112Z" level=info msg="shim disconnected" id=a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092 namespace=k8s.io May 8 00:09:41.662697 containerd[1554]: time="2025-05-08T00:09:41.662697456Z" level=warning msg="cleaning up after shim disconnected" id=a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092 namespace=k8s.io May 8 00:09:41.664842 containerd[1554]: time="2025-05-08T00:09:41.662704110Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:41.674256 containerd[1554]: time="2025-05-08T00:09:41.674227087Z" level=info msg="StopContainer for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" returns successfully" May 8 00:09:41.674718 containerd[1554]: time="2025-05-08T00:09:41.674704662Z" level=info msg="StopPodSandbox for \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\"" May 8 00:09:41.674817 containerd[1554]: time="2025-05-08T00:09:41.674791209Z" level=info msg="Container to stop \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:09:41.674856 containerd[1554]: time="2025-05-08T00:09:41.674849425Z" level=info msg="Container to stop \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:09:41.674890 containerd[1554]: time="2025-05-08T00:09:41.674883386Z" level=info msg="Container to stop \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:09:41.674928 containerd[1554]: time="2025-05-08T00:09:41.674921092Z" level=info msg="Container to stop \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:09:41.675209 containerd[1554]: time="2025-05-08T00:09:41.675189183Z" level=info msg="Container to stop \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:09:41.676904 containerd[1554]: time="2025-05-08T00:09:41.676880673Z" level=info msg="TearDown network for sandbox \"a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092\" successfully" May 8 00:09:41.677072 containerd[1554]: time="2025-05-08T00:09:41.677060723Z" level=info msg="StopPodSandbox for \"a14c32d8e041218ef318cabc469e146c3a0809fdec859d9fd84a8e4d91371092\" returns successfully" May 8 00:09:41.684378 systemd[1]: cri-containerd-57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe.scope: Deactivated successfully. 
May 8 00:09:41.709353 containerd[1554]: time="2025-05-08T00:09:41.709306588Z" level=info msg="shim disconnected" id=57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe namespace=k8s.io May 8 00:09:41.709562 containerd[1554]: time="2025-05-08T00:09:41.709532661Z" level=warning msg="cleaning up after shim disconnected" id=57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe namespace=k8s.io May 8 00:09:41.710202 containerd[1554]: time="2025-05-08T00:09:41.710038141Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:41.720312 containerd[1554]: time="2025-05-08T00:09:41.720267619Z" level=info msg="TearDown network for sandbox \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" successfully" May 8 00:09:41.720312 containerd[1554]: time="2025-05-08T00:09:41.720292046Z" level=info msg="StopPodSandbox for \"57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe\" returns successfully" May 8 00:09:41.912330 kubelet[2823]: I0508 00:09:41.912253 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-config-path\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912330 kubelet[2823]: I0508 00:09:41.912291 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cni-path\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912330 kubelet[2823]: I0508 00:09:41.912304 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-xtables-lock\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912330 kubelet[2823]: I0508 00:09:41.912312 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-hostproc\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912330 kubelet[2823]: I0508 00:09:41.912325 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ae90792-07ab-4a93-9671-7f095765d7e9-clustermesh-secrets\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912330 kubelet[2823]: I0508 00:09:41.912333 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-bpf-maps\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912758 kubelet[2823]: I0508 00:09:41.912341 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-kernel\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912758 kubelet[2823]: I0508 00:09:41.912355 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-cilium-config-path\") pod \"2cba70d1-0f21-44d6-970e-5d2a01d15dfb\" (UID: \"2cba70d1-0f21-44d6-970e-5d2a01d15dfb\") " May 8 00:09:41.912758 kubelet[2823]: I0508 00:09:41.912368 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp4nj\" (UniqueName: \"kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-kube-api-access-zp4nj\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912758 kubelet[2823]: I0508 00:09:41.912377 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-hubble-tls\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912758 kubelet[2823]: I0508 00:09:41.912386 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-cgroup\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.912758 kubelet[2823]: I0508 00:09:41.912401 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr8sx\" (UniqueName: \"kubernetes.io/projected/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-kube-api-access-zr8sx\") pod \"2cba70d1-0f21-44d6-970e-5d2a01d15dfb\" (UID: \"2cba70d1-0f21-44d6-970e-5d2a01d15dfb\") " May 8 00:09:41.913185 kubelet[2823]: I0508 00:09:41.912410 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-lib-modules\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.913185 kubelet[2823]: I0508 00:09:41.912420 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-run\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.913185 kubelet[2823]: I0508 00:09:41.912428 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-net\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.913185 kubelet[2823]: I0508 00:09:41.912440 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-etc-cni-netd\") pod \"6ae90792-07ab-4a93-9671-7f095765d7e9\" (UID: \"6ae90792-07ab-4a93-9671-7f095765d7e9\") " May 8 00:09:41.918464 kubelet[2823]: I0508 00:09:41.916984 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.918464 kubelet[2823]: I0508 00:09:41.917203 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2cba70d1-0f21-44d6-970e-5d2a01d15dfb" (UID: "2cba70d1-0f21-44d6-970e-5d2a01d15dfb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:09:41.920691 kubelet[2823]: I0508 00:09:41.919414 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:09:41.920691 kubelet[2823]: I0508 00:09:41.919440 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.920691 kubelet[2823]: I0508 00:09:41.919452 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.920691 kubelet[2823]: I0508 00:09:41.919461 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.922131 kubelet[2823]: I0508 00:09:41.921361 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-kube-api-access-zp4nj" (OuterVolumeSpecName: "kube-api-access-zp4nj") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "kube-api-access-zp4nj". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:09:41.922131 kubelet[2823]: I0508 00:09:41.921483 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae90792-07ab-4a93-9671-7f095765d7e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:09:41.922131 kubelet[2823]: I0508 00:09:41.921515 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.922131 kubelet[2823]: I0508 00:09:41.921532 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.922432 kubelet[2823]: I0508 00:09:41.922419 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.922712 kubelet[2823]: I0508 00:09:41.922701 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.923151 kubelet[2823]: I0508 00:09:41.923135 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:09:41.923184 kubelet[2823]: I0508 00:09:41.923156 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.923184 kubelet[2823]: I0508 00:09:41.923166 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ae90792-07ab-4a93-9671-7f095765d7e9" (UID: "6ae90792-07ab-4a93-9671-7f095765d7e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:09:41.924191 kubelet[2823]: I0508 00:09:41.924172 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-kube-api-access-zr8sx" (OuterVolumeSpecName: "kube-api-access-zr8sx") pod "2cba70d1-0f21-44d6-970e-5d2a01d15dfb" (UID: "2cba70d1-0f21-44d6-970e-5d2a01d15dfb"). InnerVolumeSpecName "kube-api-access-zr8sx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013018 2823 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ae90792-07ab-4a93-9671-7f095765d7e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013048 2823 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013061 2823 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013071 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013081 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zp4nj\" (UniqueName: \"kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-kube-api-access-zp4nj\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013091 2823 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ae90792-07ab-4a93-9671-7f095765d7e9-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013101 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013247 kubelet[2823]: I0508 00:09:42.013111 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zr8sx\" (UniqueName: \"kubernetes.io/projected/2cba70d1-0f21-44d6-970e-5d2a01d15dfb-kube-api-access-zr8sx\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013120 2823 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013128 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013136 2823 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013144 2823 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013152 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6ae90792-07ab-4a93-9671-7f095765d7e9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013159 2823 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.013587 kubelet[2823]: I0508 00:09:42.013166 2823 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.014591 kubelet[2823]: I0508 00:09:42.014571 2823 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ae90792-07ab-4a93-9671-7f095765d7e9-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:09:42.248844 systemd[1]: Removed slice kubepods-besteffort-pod2cba70d1_0f21_44d6_970e_5d2a01d15dfb.slice - libcontainer container kubepods-besteffort-pod2cba70d1_0f21_44d6_970e_5d2a01d15dfb.slice. May 8 00:09:42.249872 systemd[1]: Removed slice kubepods-burstable-pod6ae90792_07ab_4a93_9671_7f095765d7e9.slice - libcontainer container kubepods-burstable-pod6ae90792_07ab_4a93_9671_7f095765d7e9.slice. May 8 00:09:42.249934 systemd[1]: kubepods-burstable-pod6ae90792_07ab_4a93_9671_7f095765d7e9.slice: Consumed 4.887s CPU time, 197.4M memory peak, 73.5M read from disk, 13.3M written to disk. May 8 00:09:42.400367 systemd[1]: var-lib-kubelet-pods-2cba70d1\x2d0f21\x2d44d6\x2d970e\x2d5d2a01d15dfb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzr8sx.mount: Deactivated successfully. May 8 00:09:42.400433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe-rootfs.mount: Deactivated successfully. May 8 00:09:42.400475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57bc32357e238011dba77cb17bc6dd8b4222c7c7a0cf57eb1c1e39823b07ecbe-shm.mount: Deactivated successfully. May 8 00:09:42.400518 systemd[1]: var-lib-kubelet-pods-6ae90792\x2d07ab\x2d4a93\x2d9671\x2d7f095765d7e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzp4nj.mount: Deactivated successfully. May 8 00:09:42.400595 systemd[1]: var-lib-kubelet-pods-6ae90792\x2d07ab\x2d4a93\x2d9671\x2d7f095765d7e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:09:42.400642 systemd[1]: var-lib-kubelet-pods-6ae90792\x2d07ab\x2d4a93\x2d9671\x2d7f095765d7e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 8 00:09:42.550610 kubelet[2823]: I0508 00:09:42.550554 2823 scope.go:117] "RemoveContainer" containerID="2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f" May 8 00:09:42.555601 containerd[1554]: time="2025-05-08T00:09:42.555572415Z" level=info msg="RemoveContainer for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\"" May 8 00:09:42.557515 containerd[1554]: time="2025-05-08T00:09:42.557268709Z" level=info msg="RemoveContainer for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" returns successfully" May 8 00:09:42.560526 kubelet[2823]: I0508 00:09:42.560460 2823 scope.go:117] "RemoveContainer" containerID="7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622" May 8 00:09:42.561612 containerd[1554]: time="2025-05-08T00:09:42.561537357Z" level=info msg="RemoveContainer for \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\"" May 8 00:09:42.563820 containerd[1554]: time="2025-05-08T00:09:42.563789723Z" level=info msg="RemoveContainer for \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\" returns successfully" May 8 00:09:42.564074 kubelet[2823]: I0508 00:09:42.564005 2823 scope.go:117] "RemoveContainer" containerID="f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e" May 8 00:09:42.567349 containerd[1554]: time="2025-05-08T00:09:42.567307211Z" level=info msg="RemoveContainer for \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\"" May 8 00:09:42.569201 containerd[1554]: time="2025-05-08T00:09:42.569182634Z" level=info msg="RemoveContainer for \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\" returns successfully" May 8 00:09:42.569491 kubelet[2823]: I0508 00:09:42.569346 2823 scope.go:117] "RemoveContainer" containerID="da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d" May 8 00:09:42.570194 containerd[1554]: time="2025-05-08T00:09:42.570158344Z" level=info msg="RemoveContainer for \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\"" May 8 00:09:42.571587 containerd[1554]: time="2025-05-08T00:09:42.571448108Z" level=info msg="RemoveContainer for \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\" returns successfully" May 8 00:09:42.571738 kubelet[2823]: I0508 00:09:42.571646 2823 scope.go:117] "RemoveContainer" containerID="69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106" May 8 00:09:42.572387 containerd[1554]: time="2025-05-08T00:09:42.572374112Z" level=info msg="RemoveContainer for \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\"" May 8 00:09:42.590270 containerd[1554]: time="2025-05-08T00:09:42.590245512Z" level=info msg="RemoveContainer for \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\" returns successfully" May 8 00:09:42.590537 kubelet[2823]: I0508 00:09:42.590516 2823 scope.go:117] "RemoveContainer" containerID="2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f" May 8 00:09:42.590722 containerd[1554]: time="2025-05-08T00:09:42.590693955Z" level=error msg="ContainerStatus for \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\": not found" May 8 00:09:42.697050 kubelet[2823]: E0508 00:09:42.696939 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\": not found" containerID="2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f" May 8 00:09:42.762796 kubelet[2823]: I0508 00:09:42.710726 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f"} err="failed to get container status \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d92ddc8254d93d24eb6bf7e02dfc0d1dc86ad1d927d971763c4ba3b67ccbe3f\": not found" May 8 00:09:42.762796 kubelet[2823]: I0508 00:09:42.762683 2823 scope.go:117] "RemoveContainer" containerID="7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622" May 8 00:09:42.763096 containerd[1554]: time="2025-05-08T00:09:42.762959025Z" level=error msg="ContainerStatus for \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\": not found" May 8 00:09:42.763140 kubelet[2823]: E0508 00:09:42.763064 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\": not found" containerID="7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622" May 8 00:09:42.763140 kubelet[2823]: I0508 00:09:42.763078 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622"} err="failed to get container status \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e1df560719d9191f8aba7b7867bcf25664d24c49d18a8108cc5051455389622\": not found" May 8 00:09:42.763346 kubelet[2823]: I0508 00:09:42.763190 2823 scope.go:117] "RemoveContainer" containerID="f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e" May 8 00:09:42.763373 containerd[1554]: time="2025-05-08T00:09:42.763301062Z" level=error msg="ContainerStatus for \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\": not found" May 8 00:09:42.763489 kubelet[2823]: E0508 00:09:42.763429 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\": not found" containerID="f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e" May 8 00:09:42.763489 kubelet[2823]: I0508 00:09:42.763442 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e"} err="failed to get container status \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8b77a592e1acac582b8599c4a151127aaa56cfa2d79879a7caad9fac8a68c7e\": not found" May 8 00:09:42.763489 kubelet[2823]: I0508 00:09:42.763451 2823 scope.go:117] 
"RemoveContainer" containerID="da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d" May 8 00:09:42.763597 containerd[1554]: time="2025-05-08T00:09:42.763569746Z" level=error msg="ContainerStatus for \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\": not found" May 8 00:09:42.763698 kubelet[2823]: E0508 00:09:42.763664 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\": not found" containerID="da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d" May 8 00:09:42.763698 kubelet[2823]: I0508 00:09:42.763685 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d"} err="failed to get container status \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\": rpc error: code = NotFound desc = an error occurred when try to find container \"da27a1faf4770a2c27d5a5d47a682d16980fb06619428fe821a74bcdbf99b19d\": not found" May 8 00:09:42.763698 kubelet[2823]: I0508 00:09:42.763698 2823 scope.go:117] "RemoveContainer" containerID="69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106" May 8 00:09:42.763874 containerd[1554]: time="2025-05-08T00:09:42.763837620Z" level=error msg="ContainerStatus for \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\": not found" May 8 00:09:42.763932 kubelet[2823]: E0508 00:09:42.763917 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\": not found" containerID="69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106" May 8 00:09:42.763958 kubelet[2823]: I0508 00:09:42.763933 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106"} err="failed to get container status \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\": rpc error: code = NotFound desc = an error occurred when try to find container \"69e2e147905c41fde7551bd2e4944154ec1583409c9fd444f4ebec048fe9d106\": not found" May 8 00:09:42.763958 kubelet[2823]: I0508 00:09:42.763945 2823 scope.go:117] "RemoveContainer" containerID="ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24" May 8 00:09:42.764604 containerd[1554]: time="2025-05-08T00:09:42.764585832Z" level=info msg="RemoveContainer for \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\"" May 8 00:09:42.776963 containerd[1554]: time="2025-05-08T00:09:42.776938320Z" level=info msg="RemoveContainer for \"ef70721c1294d865c86b7948ea532a1c8ad0a5e38d7ddc1532f033347fdf1d24\" returns successfully" May 8 00:09:43.214597 sshd[4452]: Connection closed by 139.178.89.65 port 38126 May 8 00:09:43.221650 systemd[1]: Started sshd@28-139.178.70.106:22-139.178.89.65:38132.service - OpenSSH per-connection server daemon (139.178.89.65:38132). 
May 8 00:09:43.228011 sshd-session[4449]: pam_unix(sshd:session): session closed for user core May 8 00:09:43.230400 systemd-logind[1535]: Session 26 logged out. Waiting for processes to exit. May 8 00:09:43.230648 systemd[1]: sshd@27-139.178.70.106:22-139.178.89.65:38126.service: Deactivated successfully. May 8 00:09:43.231713 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:09:43.232491 systemd-logind[1535]: Removed session 26. May 8 00:09:43.270969 sshd[4609]: Accepted publickey for core from 139.178.89.65 port 38132 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:43.271810 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:43.274733 systemd-logind[1535]: New session 27 of user core. May 8 00:09:43.277631 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 00:09:43.362236 kubelet[2823]: E0508 00:09:43.362195 2823 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:09:43.777692 sshd[4614]: Connection closed by 139.178.89.65 port 38132 May 8 00:09:43.779335 sshd-session[4609]: pam_unix(sshd:session): session closed for user core May 8 00:09:43.788280 systemd[1]: sshd@28-139.178.70.106:22-139.178.89.65:38132.service: Deactivated successfully. May 8 00:09:43.790048 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:09:43.790870 systemd-logind[1535]: Session 27 logged out. Waiting for processes to exit. May 8 00:09:43.798347 systemd[1]: Started sshd@29-139.178.70.106:22-139.178.89.65:38148.service - OpenSSH per-connection server daemon (139.178.89.65:38148). May 8 00:09:43.800488 systemd-logind[1535]: Removed session 27. May 8 00:09:43.818064 kubelet[2823]: I0508 00:09:43.818041 2823 memory_manager.go:355] "RemoveStaleState removing state" podUID="2cba70d1-0f21-44d6-970e-5d2a01d15dfb" containerName="cilium-operator" May 8 00:09:43.818064 kubelet[2823]: I0508 00:09:43.818057 2823 memory_manager.go:355] "RemoveStaleState removing state" podUID="6ae90792-07ab-4a93-9671-7f095765d7e9" containerName="cilium-agent" May 8 00:09:43.826902 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 38148 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:43.827948 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:43.833788 systemd-logind[1535]: New session 28 of user core. May 8 00:09:43.840692 systemd[1]: Started session-28.scope - Session 28 of User core. May 8 00:09:43.860839 systemd[1]: Created slice kubepods-burstable-podf84c114f_de7c_4255_b870_cdd7c6ee2566.slice - libcontainer container kubepods-burstable-podf84c114f_de7c_4255_b870_cdd7c6ee2566.slice. May 8 00:09:43.892018 sshd[4627]: Connection closed by 139.178.89.65 port 38148 May 8 00:09:43.893039 sshd-session[4624]: pam_unix(sshd:session): session closed for user core May 8 00:09:43.902059 systemd[1]: sshd@29-139.178.70.106:22-139.178.89.65:38148.service: Deactivated successfully. May 8 00:09:43.903386 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:09:43.904020 systemd-logind[1535]: Session 28 logged out. Waiting for processes to exit. May 8 00:09:43.910139 systemd[1]: Started sshd@30-139.178.70.106:22-139.178.89.65:38158.service - OpenSSH per-connection server daemon (139.178.89.65:38158). May 8 00:09:43.912102 systemd-logind[1535]: Removed session 28. 
May 8 00:09:43.925061 kubelet[2823]: I0508 00:09:43.925031 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f84c114f-de7c-4255-b870-cdd7c6ee2566-clustermesh-secrets\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926187 kubelet[2823]: I0508 00:09:43.925257 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-bpf-maps\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926187 kubelet[2823]: I0508 00:09:43.925279 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-cilium-cgroup\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926187 kubelet[2823]: I0508 00:09:43.925293 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-hostproc\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926187 kubelet[2823]: I0508 00:09:43.925936 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f84c114f-de7c-4255-b870-cdd7c6ee2566-cilium-config-path\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926187 kubelet[2823]: I0508 00:09:43.925948 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-xtables-lock\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926187 kubelet[2823]: I0508 00:09:43.925959 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f84c114f-de7c-4255-b870-cdd7c6ee2566-cilium-ipsec-secrets\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926302 kubelet[2823]: I0508 00:09:43.925968 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f84c114f-de7c-4255-b870-cdd7c6ee2566-hubble-tls\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926302 kubelet[2823]: I0508 00:09:43.926000 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-cilium-run\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926302 kubelet[2823]: I0508 00:09:43.926009 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-cni-path\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926302 kubelet[2823]: I0508 00:09:43.926017 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-host-proc-sys-net\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926302 kubelet[2823]: I0508 00:09:43.926027 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-host-proc-sys-kernel\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926302 kubelet[2823]: I0508 00:09:43.926045 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpffq\" (UniqueName: \"kubernetes.io/projected/f84c114f-de7c-4255-b870-cdd7c6ee2566-kube-api-access-mpffq\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926451 kubelet[2823]: I0508 00:09:43.926065 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-etc-cni-netd\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.926451 kubelet[2823]: I0508 00:09:43.926074 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84c114f-de7c-4255-b870-cdd7c6ee2566-lib-modules\") pod \"cilium-7z2sq\" (UID: \"f84c114f-de7c-4255-b870-cdd7c6ee2566\") " pod="kube-system/cilium-7z2sq" May 8 00:09:43.938855 sshd[4633]: Accepted publickey for core from 139.178.89.65 port 38158 ssh2: RSA SHA256:YTfHkQoI5xSpvBwpAWL9S8jbOKDeUPQ5sL4eA5EtDVU May 8 00:09:43.939714 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:09:43.942533 systemd-logind[1535]: New session 29 of user core. May 8 00:09:43.951716 systemd[1]: Started session-29.scope - Session 29 of User core. May 8 00:09:44.165484 containerd[1554]: time="2025-05-08T00:09:44.165307285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7z2sq,Uid:f84c114f-de7c-4255-b870-cdd7c6ee2566,Namespace:kube-system,Attempt:0,}" May 8 00:09:44.187290 containerd[1554]: time="2025-05-08T00:09:44.186952804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:09:44.187290 containerd[1554]: time="2025-05-08T00:09:44.186998244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:09:44.187290 containerd[1554]: time="2025-05-08T00:09:44.187008441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:09:44.187290 containerd[1554]: time="2025-05-08T00:09:44.187063474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:09:44.203714 systemd[1]: Started cri-containerd-1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd.scope - libcontainer container 1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd. May 8 00:09:44.222286 containerd[1554]: time="2025-05-08T00:09:44.222258338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7z2sq,Uid:f84c114f-de7c-4255-b870-cdd7c6ee2566,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\"" May 8 00:09:44.226079 containerd[1554]: time="2025-05-08T00:09:44.226040378Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:09:44.243875 kubelet[2823]: I0508 00:09:44.243851 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cba70d1-0f21-44d6-970e-5d2a01d15dfb" path="/var/lib/kubelet/pods/2cba70d1-0f21-44d6-970e-5d2a01d15dfb/volumes" May 8 00:09:44.244189 kubelet[2823]: I0508 00:09:44.244175 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae90792-07ab-4a93-9671-7f095765d7e9" path="/var/lib/kubelet/pods/6ae90792-07ab-4a93-9671-7f095765d7e9/volumes" May 8 00:09:44.294016 containerd[1554]: time="2025-05-08T00:09:44.293968876Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600\"" May 8 00:09:44.294716 containerd[1554]: time="2025-05-08T00:09:44.294628957Z" level=info msg="StartContainer for \"1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600\"" May 8 00:09:44.322790 systemd[1]: Started cri-containerd-1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600.scope - libcontainer container 1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600. May 8 00:09:44.353753 containerd[1554]: time="2025-05-08T00:09:44.353718395Z" level=info msg="StartContainer for \"1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600\" returns successfully" May 8 00:09:44.429347 systemd[1]: cri-containerd-1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600.scope: Deactivated successfully. May 8 00:09:44.429642 systemd[1]: cri-containerd-1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600.scope: Consumed 18ms CPU time, 9.3M memory peak, 2.7M read from disk. 
May 8 00:09:44.457004 containerd[1554]: time="2025-05-08T00:09:44.456943467Z" level=info msg="shim disconnected" id=1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600 namespace=k8s.io May 8 00:09:44.457148 containerd[1554]: time="2025-05-08T00:09:44.457010632Z" level=warning msg="cleaning up after shim disconnected" id=1383342996b7770ca64dc431014827ff6c35cfb74890768f2a01714b39826600 namespace=k8s.io May 8 00:09:44.457148 containerd[1554]: time="2025-05-08T00:09:44.457020390Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:44.560477 containerd[1554]: time="2025-05-08T00:09:44.560233885Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:09:44.575582 containerd[1554]: time="2025-05-08T00:09:44.575327794Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93\"" May 8 00:09:44.576142 containerd[1554]: time="2025-05-08T00:09:44.575859898Z" level=info msg="StartContainer for \"dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93\"" May 8 00:09:44.601727 systemd[1]: Started cri-containerd-dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93.scope - libcontainer container dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93. May 8 00:09:44.625119 containerd[1554]: time="2025-05-08T00:09:44.625036385Z" level=info msg="StartContainer for \"dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93\" returns successfully" May 8 00:09:44.655828 systemd[1]: cri-containerd-dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93.scope: Deactivated successfully. May 8 00:09:44.656494 systemd[1]: cri-containerd-dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93.scope: Consumed 14ms CPU time, 7.3M memory peak, 1.9M read from disk. 
May 8 00:09:44.688743 containerd[1554]: time="2025-05-08T00:09:44.688425100Z" level=info msg="shim disconnected" id=dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93 namespace=k8s.io May 8 00:09:44.688743 containerd[1554]: time="2025-05-08T00:09:44.688464708Z" level=warning msg="cleaning up after shim disconnected" id=dc6d6c8fece37191f82ce9b55c8c97735d9c0a12521f00e481f0db7381032a93 namespace=k8s.io May 8 00:09:44.688743 containerd[1554]: time="2025-05-08T00:09:44.688472374Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:45.562648 containerd[1554]: time="2025-05-08T00:09:45.562608976Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:09:45.578083 containerd[1554]: time="2025-05-08T00:09:45.578026524Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380\"" May 8 00:09:45.584359 containerd[1554]: time="2025-05-08T00:09:45.584156392Z" level=info msg="StartContainer for \"245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380\"" May 8 00:09:45.607674 systemd[1]: Started cri-containerd-245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380.scope - libcontainer container 245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380. May 8 00:09:45.633935 containerd[1554]: time="2025-05-08T00:09:45.633881719Z" level=info msg="StartContainer for \"245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380\" returns successfully" May 8 00:09:45.684697 systemd[1]: cri-containerd-245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380.scope: Deactivated successfully. May 8 00:09:45.796342 containerd[1554]: time="2025-05-08T00:09:45.796283944Z" level=info msg="shim disconnected" id=245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380 namespace=k8s.io May 8 00:09:45.796342 containerd[1554]: time="2025-05-08T00:09:45.796333467Z" level=warning msg="cleaning up after shim disconnected" id=245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380 namespace=k8s.io May 8 00:09:45.796342 containerd[1554]: time="2025-05-08T00:09:45.796341358Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:46.041900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-245b5a390576d5c914990342f03b9715b01ad7c9a784f17fe3d03f7d1c9dd380-rootfs.mount: Deactivated successfully. 
May 8 00:09:46.565380 containerd[1554]: time="2025-05-08T00:09:46.565354548Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:09:46.581674 containerd[1554]: time="2025-05-08T00:09:46.581634476Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac\"" May 8 00:09:46.582760 containerd[1554]: time="2025-05-08T00:09:46.582735821Z" level=info msg="StartContainer for \"0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac\"" May 8 00:09:46.607732 systemd[1]: Started cri-containerd-0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac.scope - libcontainer container 0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac. May 8 00:09:46.626900 systemd[1]: cri-containerd-0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac.scope: Deactivated successfully. May 8 00:09:46.636279 containerd[1554]: time="2025-05-08T00:09:46.636177560Z" level=info msg="StartContainer for \"0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac\" returns successfully" May 8 00:09:46.653832 containerd[1554]: time="2025-05-08T00:09:46.653715678Z" level=info msg="shim disconnected" id=0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac namespace=k8s.io May 8 00:09:46.653832 containerd[1554]: time="2025-05-08T00:09:46.653750919Z" level=warning msg="cleaning up after shim disconnected" id=0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac namespace=k8s.io May 8 00:09:46.653832 containerd[1554]: time="2025-05-08T00:09:46.653756609Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:09:47.041872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a35b2ab005ee4ec4d0a97e68962e7e73e16f4fbd3439f12e82bad92197a23ac-rootfs.mount: Deactivated successfully. May 8 00:09:47.569756 containerd[1554]: time="2025-05-08T00:09:47.569731748Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:09:47.583632 containerd[1554]: time="2025-05-08T00:09:47.583536042Z" level=info msg="CreateContainer within sandbox \"1ef4887f878864ec552fb4609186e3b3856e92932c8f5fe521997072400f8bdd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026\"" May 8 00:09:47.585553 containerd[1554]: time="2025-05-08T00:09:47.583962777Z" level=info msg="StartContainer for \"3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026\"" May 8 00:09:47.614695 systemd[1]: Started cri-containerd-3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026.scope - libcontainer container 3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026. 
May 8 00:09:47.640841 containerd[1554]: time="2025-05-08T00:09:47.640812274Z" level=info msg="StartContainer for \"3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026\" returns successfully" May 8 00:09:48.593263 kubelet[2823]: I0508 00:09:48.593041 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7z2sq" podStartSLOduration=5.592966123 podStartE2EDuration="5.592966123s" podCreationTimestamp="2025-05-08 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:09:48.59239256 +0000 UTC m=+130.588271375" watchObservedRunningTime="2025-05-08 00:09:48.592966123 +0000 UTC m=+130.588844933" May 8 00:09:49.658614 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 8 00:09:50.516538 systemd[1]: run-containerd-runc-k8s.io-3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026-runc.MA2pBV.mount: Deactivated successfully. May 8 00:09:52.507937 systemd-networkd[1461]: lxc_health: Link UP May 8 00:09:52.520366 systemd-networkd[1461]: lxc_health: Gained carrier May 8 00:09:54.016675 systemd-networkd[1461]: lxc_health: Gained IPv6LL May 8 00:09:59.192013 systemd[1]: run-containerd-runc-k8s.io-3719bbe80215817cb5577f6862042514b52604e7788ffcfeb70ff29754d85026-runc.vcsTUQ.mount: Deactivated successfully. May 8 00:09:59.231171 sshd[4636]: Connection closed by 139.178.89.65 port 38158 May 8 00:09:59.231859 sshd-session[4633]: pam_unix(sshd:session): session closed for user core May 8 00:09:59.233724 systemd[1]: sshd@30-139.178.70.106:22-139.178.89.65:38158.service: Deactivated successfully. May 8 00:09:59.235036 systemd[1]: session-29.scope: Deactivated successfully. May 8 00:09:59.235674 systemd-logind[1535]: Session 29 logged out. Waiting for processes to exit. May 8 00:09:59.236961 systemd-logind[1535]: Removed session 29.