May 12 23:37:33.758920 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:05:47 -00 2025 May 12 23:37:33.758945 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5fa7c1ec1190c634be13c39e3f7599010d1d102f7681a0d92e31c1dc0e6a7a5d May 12 23:37:33.758953 kernel: Disabled fast string operations May 12 23:37:33.758957 kernel: BIOS-provided physical RAM map: May 12 23:37:33.758961 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 12 23:37:33.758966 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 12 23:37:33.758972 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 12 23:37:33.758977 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 12 23:37:33.758981 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 12 23:37:33.758986 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 12 23:37:33.758990 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 12 23:37:33.758995 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 12 23:37:33.758999 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 12 23:37:33.759004 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 12 23:37:33.759011 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 12 23:37:33.759016 kernel: NX (Execute Disable) protection: active May 12 23:37:33.759021 kernel: APIC: Static calls initialized May 12 23:37:33.759026 kernel: SMBIOS 2.7 present. May 12 23:37:33.759031 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 12 23:37:33.759036 kernel: vmware: hypercall mode: 0x00 May 12 23:37:33.759041 kernel: Hypervisor detected: VMware May 12 23:37:33.759046 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 12 23:37:33.759052 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 12 23:37:33.759057 kernel: vmware: using clock offset of 3714938588 ns May 12 23:37:33.759062 kernel: tsc: Detected 3408.000 MHz processor May 12 23:37:33.759068 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 12 23:37:33.759074 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 12 23:37:33.759079 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 12 23:37:33.759084 kernel: total RAM covered: 3072M May 12 23:37:33.759089 kernel: Found optimal setting for mtrr clean up May 12 23:37:33.759095 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 12 23:37:33.759100 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 12 23:37:33.759107 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 12 23:37:33.759112 kernel: Using GB pages for direct mapping May 12 23:37:33.759117 kernel: ACPI: Early table checksum verification disabled May 12 23:37:33.759122 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 12 23:37:33.759127 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 12 23:37:33.759133 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 12 23:37:33.759138 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 12 23:37:33.759143 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 12 23:37:33.759151 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 12 23:37:33.759157 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 12 23:37:33.759162 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 12 23:37:33.759168 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 12 23:37:33.759173 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 12 23:37:33.759178 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 12 23:37:33.759185 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 12 23:37:33.759191 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 12 23:37:33.759196 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 12 23:37:33.759202 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 12 23:37:33.759208 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 12 23:37:33.759213 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 12 23:37:33.759219 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 12 23:37:33.759224 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 12 23:37:33.759229 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 12 23:37:33.759236 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 12 23:37:33.759241 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 12 23:37:33.759246 kernel: system APIC only can use physical flat May 12 23:37:33.759252 kernel: APIC: Switched APIC routing to: physical flat May 12 23:37:33.759257 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 12 23:37:33.759262 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 12 23:37:33.759268 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 12 23:37:33.759273 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 12 23:37:33.759278 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 12 23:37:33.759283 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 12 23:37:33.759289 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 12 23:37:33.759295 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 12 23:37:33.759300 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 12 23:37:33.759306 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 12 23:37:33.759311 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 12 23:37:33.759316 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 12 23:37:33.759321 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 12 23:37:33.759327 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 12 23:37:33.759332 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 12 23:37:33.759337 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 12 23:37:33.759343 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 12 23:37:33.759349 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 12 23:37:33.759354 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 12 23:37:33.759359 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 12 23:37:33.759364 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 12 23:37:33.759370 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 12 23:37:33.759375 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 12 23:37:33.759380 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 12 23:37:33.759385 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 12 23:37:33.759390 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 12 23:37:33.759397 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 12 23:37:33.759402 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 12 23:37:33.759407 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 12 23:37:33.759413 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 May 12 23:37:33.759418 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 May 12 23:37:33.759423 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 12 23:37:33.759428 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 12 23:37:33.759433 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 12 23:37:33.759439 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 12 23:37:33.759444 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 12 23:37:33.759449 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 12 23:37:33.759455 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 12 23:37:33.759461 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 12 23:37:33.759466 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 12 23:37:33.759471 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 12 23:37:33.759476 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 12 23:37:33.759481 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 12 23:37:33.759487 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 12 23:37:33.759492 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 12 23:37:33.759497 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 12 23:37:33.759502 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 12 23:37:33.759509 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 12 23:37:33.759514 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 12 23:37:33.759520 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 12 23:37:33.759525 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 12 23:37:33.759530 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 12 23:37:33.759535 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 12 23:37:33.759540 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 12 23:37:33.759545 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 12 23:37:33.759551 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 12 23:37:33.759556 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 12 23:37:33.759562 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 12 23:37:33.759568 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 12 23:37:33.759576 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 12 23:37:33.759583 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 12 23:37:33.759589 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 12 23:37:33.759594 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 12 23:37:33.759600 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 12 23:37:33.759605 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 12 23:37:33.759611 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 12 23:37:33.759617 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 12 23:37:33.759623 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 12 23:37:33.759629 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 12 23:37:33.759634 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 12 23:37:33.759640 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 12 23:37:33.759645 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 12 23:37:33.759651 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 12 23:37:33.759656 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 12 23:37:33.759662 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 12 23:37:33.759668 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 12 23:37:33.759674 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 12 23:37:33.759680 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 12 23:37:33.759686 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 12 23:37:33.759691 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 12 23:37:33.759697 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 12 23:37:33.759702 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 12 23:37:33.759708 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 12 23:37:33.759713 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 May 12 23:37:33.759724 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 12 23:37:33.759730 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 12 23:37:33.759737 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 12 23:37:33.759753 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 12 23:37:33.759759 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 12 23:37:33.759764 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 12 23:37:33.759770 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 12 23:37:33.759776 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 12 23:37:33.759781 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 12 23:37:33.759786 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 12 23:37:33.759792 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 12 23:37:33.759798 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 12 23:37:33.759806 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 12 23:37:33.759811 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 12 23:37:33.759817 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 12 23:37:33.759823 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 12 23:37:33.759828 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 12 23:37:33.759834 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 12 23:37:33.759839 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 12 23:37:33.759845 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 12 23:37:33.759850 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 12 23:37:33.759856 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 12 23:37:33.759862 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 12 23:37:33.759869 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 12 23:37:33.759874 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 12 23:37:33.759880 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 12 23:37:33.759885 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 12 23:37:33.759891 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 12 23:37:33.759896 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 12 23:37:33.759902 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 12 23:37:33.759907 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 12 23:37:33.759913 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 12 23:37:33.759918 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 12 23:37:33.759925 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 12 23:37:33.759931 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 12 23:37:33.759936 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 12 23:37:33.759942 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 12 23:37:33.759947 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 12 23:37:33.759953 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 12 23:37:33.759958 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 12 23:37:33.759964 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 12 23:37:33.759969 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 12 23:37:33.759975 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 12 23:37:33.759982 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 12 23:37:33.759987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 12 23:37:33.759993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 12 23:37:33.759999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 12 23:37:33.760005 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 12 23:37:33.760010 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 12 23:37:33.760016 kernel: Zone ranges: May 12 23:37:33.760022 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 12 23:37:33.760028 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 12 23:37:33.760033 kernel: Normal empty May 12 23:37:33.760040 kernel: Movable zone start for each node May 12 23:37:33.760046 kernel: Early memory node ranges May 12 23:37:33.760051 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 12 23:37:33.760057 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 12 23:37:33.760063 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 12 23:37:33.760068 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 12 23:37:33.760074 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 12 23:37:33.760080 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 12 23:37:33.760086 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 12 23:37:33.760092 kernel: ACPI: PM-Timer IO Port: 0x1008 May 12 23:37:33.760098 kernel: system APIC only can use physical flat May 12 23:37:33.760104 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 12 23:37:33.760109 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 12 23:37:33.760115 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 12 23:37:33.760121 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 12 23:37:33.760126 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 12 23:37:33.760132 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 12 23:37:33.760138 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 12 23:37:33.760143 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 12 23:37:33.760150 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 12 23:37:33.760156 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 12 23:37:33.760161 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 12 23:37:33.760167 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 12 23:37:33.760173 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 12 23:37:33.760178 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 12 23:37:33.760184 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 12 23:37:33.760189 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 12 23:37:33.760195 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 12 23:37:33.760200 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 12 23:37:33.760207 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 12 23:37:33.760213 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 12 23:37:33.760218 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 12 23:37:33.760224 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 12 23:37:33.760230 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 12 23:37:33.760235 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 12 23:37:33.760241 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 12 23:37:33.760246 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 12 23:37:33.760252 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 12 23:37:33.760259 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 12 23:37:33.760264 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 12 23:37:33.760270 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 12 23:37:33.760276 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 12 23:37:33.760281 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 12 23:37:33.760287 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 12 23:37:33.760292 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 12 23:37:33.760298 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 12 23:37:33.760303 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 12 23:37:33.760309 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 12 23:37:33.760316 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 12 23:37:33.760322 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 12 23:37:33.760327 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 12 23:37:33.760333 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 12 23:37:33.760338 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 12 23:37:33.760344 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 12 23:37:33.760350 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 12 23:37:33.760355 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 12 23:37:33.760361 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 12 23:37:33.760366 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 12 23:37:33.760373 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 12 23:37:33.760379 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 12 23:37:33.760384 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 12 23:37:33.760390 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 12 23:37:33.760396 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 12 23:37:33.760401 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 12 23:37:33.760407 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 12 23:37:33.760413 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 12 23:37:33.760418 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 12 23:37:33.760424 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 12 23:37:33.760431 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 12 23:37:33.760436 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 12 23:37:33.760442 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 12 23:37:33.760448 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 12 23:37:33.760453 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 12 23:37:33.760459 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 12 23:37:33.760464 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 12 23:37:33.760470 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 12 23:37:33.760475 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 12 23:37:33.760482 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 12 23:37:33.760488 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 12 23:37:33.760493 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 12 23:37:33.760499 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 12 23:37:33.760505 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 12 23:37:33.760510 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 12 23:37:33.760516 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 12 23:37:33.760521 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 12 23:37:33.760527 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 12 23:37:33.760533 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 12 23:37:33.760539 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 12 23:37:33.760545 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 12 23:37:33.760551 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 12 23:37:33.760556 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 12 23:37:33.760562 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 12 23:37:33.760571 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 12 23:37:33.760578 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 12 23:37:33.760584 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 12 23:37:33.760590 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 12 23:37:33.760596 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 12 23:37:33.760603 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 12 23:37:33.760609 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 12 23:37:33.760614 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 12 23:37:33.760620 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 12 23:37:33.760626 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 12 23:37:33.760631 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 12 23:37:33.760637 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 12 23:37:33.760642 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 12 23:37:33.760648 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 12 23:37:33.760653 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 12 23:37:33.760661 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 12 23:37:33.760666 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 12 23:37:33.760672 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 12 23:37:33.760678 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 12 23:37:33.760684 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 12 23:37:33.760689 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 12 23:37:33.760695 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 12 23:37:33.760700 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 12 23:37:33.760706 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 12 23:37:33.760713 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 12 23:37:33.760718 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 12 23:37:33.760724 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 12 23:37:33.760729 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 12 23:37:33.760735 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 12 23:37:33.762752 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 12 23:37:33.762761 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 12 23:37:33.762767 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 12 23:37:33.762772 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 12 23:37:33.762778 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 12 23:37:33.762786 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 12 23:37:33.762792 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 12 
23:37:33.762797 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 12 23:37:33.762803 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 12 23:37:33.762809 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 12 23:37:33.762815 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 12 23:37:33.762820 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 12 23:37:33.762826 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 12 23:37:33.762832 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 12 23:37:33.762837 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 12 23:37:33.762844 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 12 23:37:33.762850 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 12 23:37:33.762856 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 12 23:37:33.762862 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 12 23:37:33.762868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 12 23:37:33.762874 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 12 23:37:33.762880 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 12 23:37:33.762885 kernel: TSC deadline timer available May 12 23:37:33.762891 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 12 23:37:33.762898 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 12 23:37:33.762904 kernel: Booting paravirtualized kernel on VMware hypervisor May 12 23:37:33.762909 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 12 23:37:33.762915 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 12 23:37:33.762921 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 12 23:37:33.762928 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 12 23:37:33.762933 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 12 23:37:33.762939 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 12 23:37:33.762945 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 12 23:37:33.762952 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 12 23:37:33.762958 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 12 23:37:33.762971 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 12 23:37:33.762978 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 12 23:37:33.762984 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 12 23:37:33.762989 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 12 23:37:33.762995 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 12 23:37:33.763001 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 12 23:37:33.763007 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 12 23:37:33.763014 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 12 23:37:33.763020 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 12 23:37:33.763026 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 12 23:37:33.763032 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 12 23:37:33.763038 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5fa7c1ec1190c634be13c39e3f7599010d1d102f7681a0d92e31c1dc0e6a7a5d May 12 23:37:33.763045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 12 23:37:33.763051 kernel: random: crng init done May 12 23:37:33.763057 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 12 23:37:33.763064 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 12 23:37:33.763070 kernel: printk: log_buf_len min size: 262144 bytes May 12 23:37:33.763076 kernel: printk: log_buf_len: 1048576 bytes May 12 23:37:33.763082 kernel: printk: early log buf free: 239648(91%) May 12 23:37:33.763088 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 12 23:37:33.763095 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 12 23:37:33.763101 kernel: Fallback order for Node 0: 0 May 12 23:37:33.763107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 12 23:37:33.763113 kernel: Policy zone: DMA32 May 12 23:37:33.763121 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 12 23:37:33.763128 kernel: Memory: 1934300K/2096628K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 162068K reserved, 0K cma-reserved) May 12 23:37:33.763135 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 12 23:37:33.763141 kernel: ftrace: allocating 37918 entries in 149 pages May 12 23:37:33.763147 kernel: ftrace: allocated 149 pages with 4 groups May 12 23:37:33.763154 kernel: Dynamic Preempt: voluntary May 12 23:37:33.763160 kernel: rcu: Preemptible hierarchical RCU implementation. May 12 23:37:33.763167 kernel: rcu: RCU event tracing is enabled. May 12 23:37:33.763173 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 12 23:37:33.763179 kernel: Trampoline variant of Tasks RCU enabled. May 12 23:37:33.763185 kernel: Rude variant of Tasks RCU enabled. May 12 23:37:33.763191 kernel: Tracing variant of Tasks RCU enabled. May 12 23:37:33.763197 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 12 23:37:33.763203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 12 23:37:33.763209 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 12 23:37:33.763217 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 12 23:37:33.763223 kernel: Console: colour VGA+ 80x25 May 12 23:37:33.763229 kernel: printk: console [tty0] enabled May 12 23:37:33.763235 kernel: printk: console [ttyS0] enabled May 12 23:37:33.763241 kernel: ACPI: Core revision 20230628 May 12 23:37:33.763248 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 12 23:37:33.763254 kernel: APIC: Switch to symmetric I/O mode setup May 12 23:37:33.763261 kernel: x2apic enabled May 12 23:37:33.763267 kernel: APIC: Switched APIC routing to: physical x2apic May 12 23:37:33.763274 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 12 23:37:33.763281 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 12 23:37:33.763287 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 12 23:37:33.763293 kernel: Disabled fast string operations May 12 23:37:33.763299 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 12 23:37:33.763305 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 12 23:37:33.763311 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 12 23:37:33.763318 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 12 23:37:33.763324 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 12 23:37:33.763331 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 12 23:37:33.763337 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 12 23:37:33.763343 kernel: RETBleed: Mitigation: Enhanced IBRS May 12 23:37:33.763350 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 12 23:37:33.763356 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 12 23:37:33.763362 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 12 23:37:33.763368 kernel: SRBDS: Unknown: Dependent on hypervisor status May 12 23:37:33.763374 kernel: GDS: Unknown: Dependent on hypervisor status May 12 23:37:33.763380 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 12 23:37:33.763387 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 12 23:37:33.763393 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 12 23:37:33.763399 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 12 23:37:33.763405 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 12 23:37:33.763412 kernel: Freeing SMP alternatives memory: 32K May 12 23:37:33.763418 kernel: pid_max: default: 131072 minimum: 1024 May 12 23:37:33.763424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 12 23:37:33.763430 kernel: landlock: Up and running. May 12 23:37:33.763436 kernel: SELinux: Initializing. May 12 23:37:33.763443 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 12 23:37:33.763449 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 12 23:37:33.763456 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 12 23:37:33.763462 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 12 23:37:33.763468 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 12 23:37:33.763474 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 12 23:37:33.763480 kernel: Performance Events: Skylake events, core PMU driver. May 12 23:37:33.763487 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 12 23:37:33.763494 kernel: core: CPUID marked event: 'instructions' unavailable May 12 23:37:33.763500 kernel: core: CPUID marked event: 'bus cycles' unavailable May 12 23:37:33.763506 kernel: core: CPUID marked event: 'cache references' unavailable May 12 23:37:33.763512 kernel: core: CPUID marked event: 'cache misses' unavailable May 12 23:37:33.763518 kernel: core: CPUID marked event: 'branch instructions' unavailable May 12 23:37:33.763524 kernel: core: CPUID marked event: 'branch misses' unavailable May 12 23:37:33.763530 kernel: ... 
version: 1 May 12 23:37:33.763536 kernel: ... bit width: 48 May 12 23:37:33.763542 kernel: ... generic registers: 4 May 12 23:37:33.763549 kernel: ... value mask: 0000ffffffffffff May 12 23:37:33.763555 kernel: ... max period: 000000007fffffff May 12 23:37:33.763561 kernel: ... fixed-purpose events: 0 May 12 23:37:33.763570 kernel: ... event mask: 000000000000000f May 12 23:37:33.763577 kernel: signal: max sigframe size: 1776 May 12 23:37:33.763583 kernel: rcu: Hierarchical SRCU implementation. May 12 23:37:33.763589 kernel: rcu: Max phase no-delay instances is 400. May 12 23:37:33.763595 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 12 23:37:33.763601 kernel: smp: Bringing up secondary CPUs ... May 12 23:37:33.763608 kernel: smpboot: x86: Booting SMP configuration: May 12 23:37:33.763614 kernel: .... node #0, CPUs: #1 May 12 23:37:33.763620 kernel: Disabled fast string operations May 12 23:37:33.763626 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 12 23:37:33.763632 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 12 23:37:33.763638 kernel: smp: Brought up 1 node, 2 CPUs May 12 23:37:33.763644 kernel: smpboot: Max logical packages: 128 May 12 23:37:33.763650 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 12 23:37:33.763656 kernel: devtmpfs: initialized May 12 23:37:33.763662 kernel: x86/mm: Memory block size: 128MB May 12 23:37:33.763670 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 12 23:37:33.763676 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 12 23:37:33.763683 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 12 23:37:33.763690 kernel: pinctrl core: initialized pinctrl subsystem May 12 23:37:33.763696 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 12 23:37:33.763702 kernel: audit: initializing netlink subsys (disabled) May 12 23:37:33.763708 kernel: audit: type=2000 audit(1747093052.064:1): state=initialized audit_enabled=0 res=1 May 12 23:37:33.763714 kernel: thermal_sys: Registered thermal governor 'step_wise' May 12 23:37:33.763720 kernel: thermal_sys: Registered thermal governor 'user_space' May 12 23:37:33.763727 kernel: cpuidle: using governor menu May 12 23:37:33.763733 kernel: Simple Boot Flag at 0x36 set to 0x80 May 12 23:37:33.763746 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 12 23:37:33.763753 kernel: dca service started, version 1.12.1 May 12 23:37:33.763759 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 12 23:37:33.763765 kernel: PCI: Using configuration type 1 for base access May 12 23:37:33.763771 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 12 23:37:33.763777 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 12 23:37:33.763783 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 12 23:37:33.763791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 12 23:37:33.763797 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 12 23:37:33.763803 kernel: ACPI: Added _OSI(Module Device) May 12 23:37:33.763809 kernel: ACPI: Added _OSI(Processor Device) May 12 23:37:33.763815 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 12 23:37:33.763821 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 12 23:37:33.763827 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 12 23:37:33.763834 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 12 23:37:33.763840 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 12 23:37:33.763847 kernel: ACPI: Interpreter enabled May 12 23:37:33.763853 kernel: ACPI: PM: (supports S0 S1 S5) May 12 23:37:33.763859 kernel: ACPI: Using IOAPIC for interrupt routing May 12 23:37:33.763865 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 12 23:37:33.763871 kernel: PCI: Using E820 reservations for host bridge windows May 12 23:37:33.763878 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 12 23:37:33.763884 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 12 23:37:33.763966 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 12 23:37:33.764024 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 12 23:37:33.764074 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 12 23:37:33.764083 kernel: PCI host bridge to bus 0000:00 May 12 23:37:33.764135 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 12 23:37:33.764181 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 12 23:37:33.764227 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 12 23:37:33.764271 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 12 23:37:33.764318 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 12 23:37:33.764362 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 12 23:37:33.764426 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 12 23:37:33.764487 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 12 23:37:33.764544 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 12 23:37:33.764600 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 12 23:37:33.764655 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 12 23:37:33.764708 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 12 23:37:33.767130 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 12 23:37:33.767193 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 12 23:37:33.767250 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 12 23:37:33.767309 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 12 23:37:33.767366 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 12 23:37:33.767419 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 12 23:37:33.767476 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 12 23:37:33.767530 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 12 23:37:33.767596 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 12 23:37:33.767654 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 12 23:37:33.767707 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 12 23:37:33.767772 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 12 23:37:33.767824 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 12 23:37:33.767874 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 12 23:37:33.767926 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 12 23:37:33.767982 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 12 23:37:33.768042 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.768095 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 12 23:37:33.768155 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.768211 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 12 23:37:33.768268 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.768321 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 12 23:37:33.768378 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.768431 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 12 23:37:33.768490 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.768544 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 12 23:37:33.768602 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.768655 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 12 23:37:33.768711 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.770790 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 12 23:37:33.770859 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.770915 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 12 23:37:33.770976 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771031 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 12 23:37:33.771088 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771145 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 12 23:37:33.771202 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771255 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 12 23:37:33.771310 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771362 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 12 23:37:33.771419 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771473 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 12 23:37:33.771532 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771585 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 12 23:37:33.771641 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771694 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 12 23:37:33.771766 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.771820 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 12 23:37:33.771879 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 May 12 23:37:33.771931 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 12 23:37:33.771990 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772042 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 12 23:37:33.772096 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772149 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 12 23:37:33.772207 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772259 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 12 23:37:33.772315 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772367 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 12 23:37:33.772423 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772475 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 12 23:37:33.772534 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772588 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 12 23:37:33.772644 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.772695 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 12 23:37:33.776104 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776170 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 12 23:37:33.776232 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776291 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 12 23:37:33.776348 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776401 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 12 23:37:33.776457 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776509 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 12 23:37:33.776564 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776624 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 12 23:37:33.776680 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776734 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 12 23:37:33.776798 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776852 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 12 23:37:33.776907 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 12 23:37:33.776962 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 12 23:37:33.777018 kernel: pci_bus 0000:01: extended config space not accessible May 12 23:37:33.777072 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 12 23:37:33.777127 kernel: pci_bus 0000:02: extended config space not accessible May 12 23:37:33.777136 kernel: acpiphp: Slot [32] registered May 12 23:37:33.777143 kernel: acpiphp: Slot [33] registered May 12 23:37:33.777149 kernel: acpiphp: Slot [34] registered May 12 23:37:33.777157 kernel: acpiphp: Slot [35] registered May 12 23:37:33.777164 kernel: acpiphp: Slot [36] registered May 12 23:37:33.777170 kernel: acpiphp: Slot [37] registered May 12 23:37:33.777176 kernel: acpiphp: Slot [38] registered May 12 23:37:33.777182 kernel: acpiphp: Slot [39] registered May 12 23:37:33.777188 kernel: acpiphp: Slot [40] registered May 12 23:37:33.777194 kernel: acpiphp: Slot [41] registered May 12 23:37:33.777200 kernel: acpiphp: Slot [42] registered May 12 
23:37:33.777206 kernel: acpiphp: Slot [43] registered May 12 23:37:33.777212 kernel: acpiphp: Slot [44] registered May 12 23:37:33.777220 kernel: acpiphp: Slot [45] registered May 12 23:37:33.777226 kernel: acpiphp: Slot [46] registered May 12 23:37:33.777232 kernel: acpiphp: Slot [47] registered May 12 23:37:33.777238 kernel: acpiphp: Slot [48] registered May 12 23:37:33.777244 kernel: acpiphp: Slot [49] registered May 12 23:37:33.777250 kernel: acpiphp: Slot [50] registered May 12 23:37:33.777256 kernel: acpiphp: Slot [51] registered May 12 23:37:33.777262 kernel: acpiphp: Slot [52] registered May 12 23:37:33.777268 kernel: acpiphp: Slot [53] registered May 12 23:37:33.777275 kernel: acpiphp: Slot [54] registered May 12 23:37:33.777282 kernel: acpiphp: Slot [55] registered May 12 23:37:33.777288 kernel: acpiphp: Slot [56] registered May 12 23:37:33.777294 kernel: acpiphp: Slot [57] registered May 12 23:37:33.777301 kernel: acpiphp: Slot [58] registered May 12 23:37:33.777307 kernel: acpiphp: Slot [59] registered May 12 23:37:33.777313 kernel: acpiphp: Slot [60] registered May 12 23:37:33.777319 kernel: acpiphp: Slot [61] registered May 12 23:37:33.777325 kernel: acpiphp: Slot [62] registered May 12 23:37:33.777331 kernel: acpiphp: Slot [63] registered May 12 23:37:33.777386 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 12 23:37:33.777437 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 12 23:37:33.777488 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 12 23:37:33.777540 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 12 23:37:33.777596 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 12 23:37:33.777648 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 12 23:37:33.777700 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 12 23:37:33.777767 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 12 23:37:33.777820 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 12 23:37:33.777878 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 12 23:37:33.777932 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 12 23:37:33.777984 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 12 23:37:33.778037 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 12 23:37:33.778091 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 12 23:37:33.778147 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 12 23:37:33.778201 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 12 23:37:33.778253 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 12 23:37:33.778305 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 12 23:37:33.778358 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 12 23:37:33.778411 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 12 23:37:33.778463 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 12 23:37:33.778515 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 12 23:37:33.778571 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 12 23:37:33.778624 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 12 23:37:33.778676 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 12 23:37:33.778728 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 12 23:37:33.780132 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 12 23:37:33.780190 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 12 23:37:33.780245 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 12 23:37:33.780303 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 12 23:37:33.780356 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 12 23:37:33.780409 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 12 23:37:33.780464 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 12 23:37:33.780516 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 12 23:37:33.780576 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 12 23:37:33.780631 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 12 23:37:33.780684 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 12 23:37:33.780737 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 12 23:37:33.784697 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 12 23:37:33.784767 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 12 23:37:33.784824 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 12 23:37:33.784885 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 12 23:37:33.784945 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 12 23:37:33.785000 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 12 23:37:33.786906 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 12 23:37:33.786975 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 12 23:37:33.787034 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 12 23:37:33.787091 kernel: pci 0000:0b:00.0: supports D1 D2 May 12 23:37:33.787145 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 12 23:37:33.787203 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 12 23:37:33.787266 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 12 23:37:33.787337 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 12 23:37:33.787413 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 12 23:37:33.787472 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 12 23:37:33.787526 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 12 23:37:33.787592 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 12 23:37:33.787659 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 12 23:37:33.787725 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 12 23:37:33.787801 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 12 23:37:33.787872 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 12 23:37:33.787945 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 12 23:37:33.788003 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 12 23:37:33.788056 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 12 23:37:33.788109 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 12 23:37:33.788162 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 12 23:37:33.788218 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 12 23:37:33.788272 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 12 23:37:33.788327 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 12 23:37:33.788380 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 12 23:37:33.788433 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 12 23:37:33.788488 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 12 23:37:33.788540 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 12 23:37:33.788593 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 12 23:37:33.788650 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 12 23:37:33.788704 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 12 23:37:33.788768 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 12 23:37:33.788824 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 12 23:37:33.788876 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 12 23:37:33.788929 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 12 23:37:33.788982 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 12 23:37:33.789056 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 12 23:37:33.789426 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 12 23:37:33.789515 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 12 23:37:33.789588 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 12 23:37:33.789650 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 12 23:37:33.789706 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 12 23:37:33.789839 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 12 23:37:33.789894 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 12 23:37:33.789954 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 12 23:37:33.790007 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 12 23:37:33.790059 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 12 23:37:33.790139 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 12 23:37:33.790196 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 12 23:37:33.790260 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 12 23:37:33.790327 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 12 23:37:33.790381 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 12 23:37:33.790437 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 12 23:37:33.790491 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 12 23:37:33.790545 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 12 23:37:33.791864 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 12 23:37:33.791925 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 12 23:37:33.791978 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 12 23:37:33.792030 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 12 23:37:33.792085 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 12 23:37:33.792143 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 12 23:37:33.792195 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 12 23:37:33.792247 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 12 23:37:33.792330 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 12 23:37:33.792398 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 12 23:37:33.792455 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 12 23:37:33.792508 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 12 23:37:33.792565 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 12 23:37:33.792617 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 12 23:37:33.792669 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 12 23:37:33.792724 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 12 23:37:33.792784 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 12 23:37:33.792836 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 12 23:37:33.792890 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 12 23:37:33.792957 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 12 23:37:33.793023 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 12 23:37:33.793089 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 12 23:37:33.793144 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 12 23:37:33.793197 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 12 23:37:33.793262 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 12 23:37:33.793317 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 12 23:37:33.793370 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 12 23:37:33.793425 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 12 23:37:33.793477 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 12 23:37:33.793533 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 12 23:37:33.793542 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 12 23:37:33.793549 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 May 12 23:37:33.793555 kernel: ACPI: PCI: Interrupt link LNKB disabled May 12 23:37:33.793562 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 12 23:37:33.793568 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 12 23:37:33.793574 kernel: iommu: Default domain type: Translated May 12 23:37:33.793581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 12 23:37:33.793589 kernel: PCI: Using ACPI for IRQ routing May 12 23:37:33.793596 kernel: PCI: pci_cache_line_size set to 64 bytes May 12 23:37:33.793602 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 12 23:37:33.793608 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 12 23:37:33.793662 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 12 23:37:33.793716 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 12 23:37:33.794103 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 12 23:37:33.794115 kernel: vgaarb: loaded May 12 23:37:33.794123 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 12 23:37:33.794132 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 12 23:37:33.794138 kernel: clocksource: Switched to clocksource tsc-early May 12 23:37:33.794145 kernel: VFS: Disk quotas dquot_6.6.0 May 12 23:37:33.794151 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 12 23:37:33.794158 kernel: pnp: PnP ACPI init May 12 23:37:33.794215 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 12 23:37:33.794266 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 12 23:37:33.794314 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 12 23:37:33.794367 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 12 23:37:33.794418 kernel: pnp 00:06: [dma 2] May 12 23:37:33.794473 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 12 23:37:33.794522 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 12 23:37:33.794582 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 12 23:37:33.794594 kernel: pnp: PnP ACPI: found 8 devices May 12 23:37:33.794600 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 12 23:37:33.794609 kernel: NET: Registered PF_INET protocol family May 12 23:37:33.794616 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 12 23:37:33.794623 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 12 23:37:33.794629 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 12 23:37:33.794635 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 12 23:37:33.794641 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 12 23:37:33.794647 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 12 23:37:33.794654 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 12 23:37:33.794660 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 12 23:37:33.794667 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 12 23:37:33.794674 kernel: NET: Registered PF_XDP protocol family May 12 23:37:33.794735 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 12 23:37:33.794808 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 12 23:37:33.794880 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 12 23:37:33.795197 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 12 23:37:33.795271 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 12 23:37:33.795340 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 12 23:37:33.795436 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 12 23:37:33.795496 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 12 23:37:33.795552 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 12 23:37:33.795607 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 12 23:37:33.795667 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 12 23:37:33.795722 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 12 23:37:33.795808 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 12 23:37:33.795868 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 12 23:37:33.795933 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 12 23:37:33.795987 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 12 23:37:33.796046 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 12 23:37:33.796099 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 12 23:37:33.796152 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 12 23:37:33.796205 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 12 23:37:33.796258 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 12 23:37:33.796311 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 12 23:37:33.796366 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 12 23:37:33.796420 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 12 23:37:33.796473 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 12 23:37:33.796526 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 12 23:37:33.796578 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.796632 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.796684 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.796810 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.796870 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.796922 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.796975 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797027 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797078 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797130 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] May 12 23:37:33.797182 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797238 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797291 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797344 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797396 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797448 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797500 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797551 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797615 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797672 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797724 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797785 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797838 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.797892 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.797945 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798013 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798078 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798148 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798202 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798255 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798307 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798360 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798411 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798463 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798516 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798572 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798635 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798700 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798799 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798854 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.798906 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.798957 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799009 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799060 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799119 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799172 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799223 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799286 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799338 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799390 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] May 12 23:37:33.799442 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799494 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799546 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799601 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799654 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799707 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799767 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799820 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799872 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.799925 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.799978 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.800030 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 12 23:37:33.800082 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.800160 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.800225 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.800288 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.800350 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.800415 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.800469 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.800522 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.800577 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.800630 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 12 23:37:33.800686 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.802664 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 12 23:37:33.802783 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.802846 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 12 23:37:33.802901 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.802955 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 12 23:37:33.803007 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.803059 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 12 23:37:33.803112 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.803165 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 12 23:37:33.803221 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 12 23:37:33.803503 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 12 23:37:33.803561 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 12 23:37:33.803622 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 12 23:37:33.803684 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 12 23:37:33.805014 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 12 23:37:33.805082 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 12 23:37:33.805141 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 
12 23:37:33.805200 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 12 23:37:33.805253 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 12 23:37:33.805307 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 12 23:37:33.805362 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 12 23:37:33.805423 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 12 23:37:33.805476 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 12 23:37:33.805530 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 12 23:37:33.805583 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 12 23:37:33.805636 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 12 23:37:33.805692 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 12 23:37:33.805761 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 12 23:37:33.805818 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 12 23:37:33.805871 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 12 23:37:33.805924 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 12 23:37:33.805977 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 12 23:37:33.806030 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 12 23:37:33.806082 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 12 23:37:33.806138 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 12 23:37:33.806191 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 12 23:37:33.806243 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 12 23:37:33.806296 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 12 23:37:33.806349 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 12 23:37:33.806401 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 12 23:37:33.806453 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 12 23:37:33.806511 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 12 23:37:33.806588 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 12 23:37:33.806650 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 12 23:37:33.806713 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 12 23:37:33.806827 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 12 23:37:33.806882 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 12 23:37:33.806934 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 12 23:37:33.806989 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 12 23:37:33.807043 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 12 23:37:33.807114 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 12 23:37:33.807180 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 12 23:37:33.807246 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 12 23:37:33.807301 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 12 23:37:33.807353 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 12 23:37:33.807405 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 12 23:37:33.807457 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 12 23:37:33.807510 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] May 12 23:37:33.807562 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 12 23:37:33.807638 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 12 23:37:33.807695 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 12 23:37:33.808094 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 12 23:37:33.808161 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 12 23:37:33.808218 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 12 23:37:33.808272 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 12 23:37:33.808326 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 12 23:37:33.808379 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 12 23:37:33.808432 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 12 23:37:33.808485 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 12 23:37:33.808540 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 12 23:37:33.808593 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 12 23:37:33.808646 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 12 23:37:33.808699 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 12 23:37:33.808817 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 12 23:37:33.808871 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 12 23:37:33.808925 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 12 23:37:33.808976 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 12 23:37:33.809028 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 12 23:37:33.809080 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 12 23:37:33.809137 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 12 23:37:33.809189 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 12 23:37:33.809240 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 12 23:37:33.809292 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 12 23:37:33.809356 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 12 23:37:33.809424 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 12 23:37:33.809489 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 12 23:37:33.809544 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 12 23:37:33.809596 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 12 23:37:33.809651 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 12 23:37:33.809704 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 12 23:37:33.809773 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 12 23:37:33.809829 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 12 23:37:33.809882 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 12 23:37:33.809935 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 12 23:37:33.810011 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 12 23:37:33.810077 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 12 23:37:33.810133 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 12 23:37:33.810186 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] May 12 23:37:33.810244 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 12 23:37:33.810298 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 12 23:37:33.810350 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 12 23:37:33.810402 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 12 23:37:33.810456 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 12 23:37:33.810510 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 12 23:37:33.810563 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 12 23:37:33.810620 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 12 23:37:33.810673 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 12 23:37:33.810729 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 12 23:37:33.810834 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 12 23:37:33.810890 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 12 23:37:33.810943 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 12 23:37:33.810995 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 12 23:37:33.811049 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 12 23:37:33.811101 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 12 23:37:33.811154 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 12 23:37:33.811207 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 12 23:37:33.811259 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 12 23:37:33.811315 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 12 23:37:33.811368 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 12 23:37:33.811421 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 12 23:37:33.811474 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 12 23:37:33.811526 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 12 23:37:33.811578 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 12 23:37:33.811631 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 12 23:37:33.811683 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 12 23:37:33.811730 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 12 23:37:33.812039 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 12 23:37:33.812113 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 12 23:37:33.812163 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 12 23:37:33.812221 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 12 23:37:33.812271 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 12 23:37:33.812320 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 12 23:37:33.812381 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 12 23:37:33.812446 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 12 23:37:33.812509 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 12 23:37:33.812559 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 12 23:37:33.812606 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 12 23:37:33.812658 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] May 12 23:37:33.812707 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 12 23:37:33.813069 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 12 23:37:33.813129 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 12 23:37:33.813182 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 12 23:37:33.813230 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 12 23:37:33.813281 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 12 23:37:33.813329 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 12 23:37:33.813376 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 12 23:37:33.813426 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 12 23:37:33.813477 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 12 23:37:33.813528 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 12 23:37:33.813575 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 12 23:37:33.813626 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 12 23:37:33.813673 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 12 23:37:33.813727 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 12 23:37:33.813791 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 12 23:37:33.813843 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 12 23:37:33.813891 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 12 23:37:33.813953 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 12 23:37:33.814002 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 12 23:37:33.814049 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 12 23:37:33.814104 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 12 23:37:33.814153 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 12 23:37:33.814201 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 12 23:37:33.814253 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 12 23:37:33.814301 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 12 23:37:33.814349 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 12 23:37:33.814401 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 12 23:37:33.814451 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 12 23:37:33.814503 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 12 23:37:33.814551 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 12 23:37:33.814603 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 12 23:37:33.814651 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 12 23:37:33.814703 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 12 23:37:33.814814 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 12 23:37:33.814870 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 12 23:37:33.814918 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 12 23:37:33.814969 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 12 23:37:33.815018 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 12 23:37:33.815066 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 12 23:37:33.815117 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 12 23:37:33.815169 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 12 23:37:33.815241 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 12 23:37:33.815300 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 12 23:37:33.815360 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 12 23:37:33.815412 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 12 23:37:33.815465 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 12 23:37:33.815516 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 12 23:37:33.815566 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 12 23:37:33.815636 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 12 23:37:33.815694 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 12 23:37:33.815762 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 12 23:37:33.815817 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 12 23:37:33.815869 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 12 23:37:33.815924 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 12 23:37:33.815974 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 12 23:37:33.816026 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 12 23:37:33.816075 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 12 23:37:33.816126 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 12 23:37:33.816180 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 12 23:37:33.816229 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 12 23:37:33.816277 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 12 23:37:33.816329 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 12 23:37:33.816377 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 12 23:37:33.816429 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 12 23:37:33.816478 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 12 23:37:33.816533 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 12 23:37:33.816581 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 12 23:37:33.816632 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 12 23:37:33.816680 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 12 23:37:33.816733 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 12 23:37:33.817155 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 12 23:37:33.817217 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 12 23:37:33.817287 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 12 23:37:33.817350 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 12 23:37:33.817361 kernel: PCI: CLS 32 bytes, default 64 May 12 23:37:33.817368 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 12 23:37:33.817375 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 12 
23:37:33.817382 kernel: clocksource: Switched to clocksource tsc May 12 23:37:33.817392 kernel: Initialise system trusted keyrings May 12 23:37:33.817399 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 12 23:37:33.817405 kernel: Key type asymmetric registered May 12 23:37:33.817412 kernel: Asymmetric key parser 'x509' registered May 12 23:37:33.817418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 12 23:37:33.817425 kernel: io scheduler mq-deadline registered May 12 23:37:33.817431 kernel: io scheduler kyber registered May 12 23:37:33.817438 kernel: io scheduler bfq registered May 12 23:37:33.817492 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 12 23:37:33.817550 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.817619 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 12 23:37:33.817674 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.817728 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 12 23:37:33.817796 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.817851 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 12 23:37:33.817904 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.817961 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 12 23:37:33.818014 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818082 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 12 23:37:33.818148 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818214 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 12 23:37:33.818271 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818326 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 12 23:37:33.818379 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818458 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 12 23:37:33.818521 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818578 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 12 23:37:33.818635 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818692 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 12 23:37:33.818790 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818846 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 12 23:37:33.818899 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.818953 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 12 23:37:33.819006 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.819063 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 12 23:37:33.819440 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.819502 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 12 23:37:33.819558 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.819614 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 12 23:37:33.819672 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.819726 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 12 23:37:33.819788 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.819842 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 12 23:37:33.819895 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.819948 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 12 23:37:33.820001 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.820057 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 12 23:37:33.820429 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.820491 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 12 23:37:33.820546 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.820601 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 12 23:37:33.820658 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.820712 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 12 23:37:33.820806 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.820871 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 12 23:37:33.820941 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821010 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 12 23:37:33.821064 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821117 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 12 23:37:33.821170 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821223 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 12 23:37:33.821293 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821359 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 12 23:37:33.821426 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821480 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 12 23:37:33.821532 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821585 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 12 23:37:33.821638 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821694 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 12 23:37:33.821758 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821815 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 12 23:37:33.821868 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 12 23:37:33.821878 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 12 23:37:33.821885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 12 23:37:33.821895 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 12 23:37:33.821902 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 12 23:37:33.821908 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 12 23:37:33.821915 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 12 23:37:33.821970 kernel: rtc_cmos 00:01: registered as rtc0 May 12 23:37:33.822019 kernel: rtc_cmos 00:01: setting system clock to 2025-05-12T23:37:33 UTC (1747093053) May 12 23:37:33.822067 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 12 23:37:33.822077 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 12 23:37:33.822086 kernel: intel_pstate: CPU model not supported May 12 23:37:33.822092 kernel: NET: Registered PF_INET6 protocol family May 12 23:37:33.822099 kernel: Segment Routing with IPv6 May 12 23:37:33.822106 kernel: In-situ OAM (IOAM) with IPv6 May 12 23:37:33.822112 kernel: NET: Registered PF_PACKET protocol family May 12 23:37:33.822119 kernel: Key type dns_resolver registered May 12 23:37:33.822127 kernel: IPI shorthand broadcast: enabled May 12 23:37:33.822133 kernel: sched_clock: Marking stable (874003717, 223647724)->(1156087094, -58435653) May 12 23:37:33.822140 kernel: registered taskstats version 1 May 12 23:37:33.822148 kernel: Loading compiled-in X.509 certificates May 12 23:37:33.822154 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: d95820acae50a149bf4631b9744f68258af45186' May 12 23:37:33.822161 kernel: Key type .fscrypt registered May 12 23:37:33.822167 kernel: Key type fscrypt-provisioning registered May 12 23:37:33.822174 
kernel: ima: No TPM chip found, activating TPM-bypass! May 12 23:37:33.822180 kernel: ima: Allocated hash algorithm: sha1 May 12 23:37:33.822187 kernel: ima: No architecture policies found May 12 23:37:33.822193 kernel: clk: Disabling unused clocks May 12 23:37:33.822199 kernel: Freeing unused kernel image (initmem) memory: 43484K May 12 23:37:33.822207 kernel: Write protecting the kernel read-only data: 38912k May 12 23:37:33.822214 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 12 23:37:33.822220 kernel: Run /init as init process May 12 23:37:33.822226 kernel: with arguments: May 12 23:37:33.822233 kernel: /init May 12 23:37:33.822240 kernel: with environment: May 12 23:37:33.822246 kernel: HOME=/ May 12 23:37:33.822252 kernel: TERM=linux May 12 23:37:33.822258 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 12 23:37:33.822265 systemd[1]: Successfully made /usr/ read-only. May 12 23:37:33.822275 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 12 23:37:33.822283 systemd[1]: Detected virtualization vmware. May 12 23:37:33.822289 systemd[1]: Detected architecture x86-64. May 12 23:37:33.822296 systemd[1]: Running in initrd. May 12 23:37:33.822302 systemd[1]: No hostname configured, using default hostname. May 12 23:37:33.822310 systemd[1]: Hostname set to . May 12 23:37:33.822318 systemd[1]: Initializing machine ID from random generator. May 12 23:37:33.822324 systemd[1]: Queued start job for default target initrd.target. May 12 23:37:33.822331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 23:37:33.822338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 12 23:37:33.822346 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 12 23:37:33.822353 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 12 23:37:33.822359 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 12 23:37:33.822366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 12 23:37:33.822375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 12 23:37:33.822382 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 12 23:37:33.822389 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 23:37:33.822396 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 12 23:37:33.822403 systemd[1]: Reached target paths.target - Path Units. May 12 23:37:33.822409 systemd[1]: Reached target slices.target - Slice Units. May 12 23:37:33.822416 systemd[1]: Reached target swap.target - Swaps. May 12 23:37:33.822424 systemd[1]: Reached target timers.target - Timer Units. May 12 23:37:33.822431 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 12 23:37:33.822438 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
May 12 23:37:33.822444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 12 23:37:33.822451 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 12 23:37:33.822458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 12 23:37:33.822465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 12 23:37:33.822472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 12 23:37:33.822478 systemd[1]: Reached target sockets.target - Socket Units. May 12 23:37:33.822487 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 12 23:37:33.822494 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 12 23:37:33.822500 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 12 23:37:33.822508 systemd[1]: Starting systemd-fsck-usr.service... May 12 23:37:33.822515 systemd[1]: Starting systemd-journald.service - Journal Service... May 12 23:37:33.822522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 12 23:37:33.822528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:37:33.822535 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 12 23:37:33.822542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:37:33.822552 systemd[1]: Finished systemd-fsck-usr.service. May 12 23:37:33.822559 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 12 23:37:33.822584 systemd-journald[216]: Collecting audit messages is disabled. May 12 23:37:33.822603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:37:33.822610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 23:37:33.822617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 23:37:33.822624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 12 23:37:33.822631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 12 23:37:33.822639 kernel: Bridge firewalling registered May 12 23:37:33.822646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 12 23:37:33.822653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:37:33.822660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 23:37:33.822667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:37:33.822673 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 12 23:37:33.822680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 23:37:33.822688 systemd-journald[216]: Journal started May 12 23:37:33.822704 systemd-journald[216]: Runtime Journal (/run/log/journal/308a6fd50dd34c07a640d59298b2dd4a) is 4.8M, max 38.6M, 33.8M free. May 12 23:37:33.824159 systemd[1]: Started systemd-journald.service - Journal Service. 
May 12 23:37:33.756947 systemd-modules-load[217]: Inserted module 'overlay' May 12 23:37:33.790321 systemd-modules-load[217]: Inserted module 'br_netfilter' May 12 23:37:33.824731 dracut-cmdline[238]: dracut-dracut-053 May 12 23:37:33.824731 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5fa7c1ec1190c634be13c39e3f7599010d1d102f7681a0d92e31c1dc0e6a7a5d May 12 23:37:33.828817 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 23:37:33.833020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:37:33.834221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 12 23:37:33.861465 systemd-resolved[289]: Positive Trust Anchors: May 12 23:37:33.861473 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 23:37:33.861495 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 23:37:33.863124 systemd-resolved[289]: Defaulting to hostname 'linux'. May 12 23:37:33.863707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 23:37:33.863878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 23:37:33.869747 kernel: SCSI subsystem initialized May 12 23:37:33.875753 kernel: Loading iSCSI transport class v2.0-870. May 12 23:37:33.882758 kernel: iscsi: registered transport (tcp) May 12 23:37:33.895761 kernel: iscsi: registered transport (qla4xxx) May 12 23:37:33.895793 kernel: QLogic iSCSI HBA Driver May 12 23:37:33.916016 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 12 23:37:33.920841 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 12 23:37:33.935016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 12 23:37:33.935046 kernel: device-mapper: uevent: version 1.0.3 May 12 23:37:33.936087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 12 23:37:33.967759 kernel: raid6: avx2x4 gen() 47397 MB/s May 12 23:37:33.983758 kernel: raid6: avx2x2 gen() 52721 MB/s May 12 23:37:34.000979 kernel: raid6: avx2x1 gen() 44488 MB/s May 12 23:37:34.001029 kernel: raid6: using algorithm avx2x2 gen() 52721 MB/s May 12 23:37:34.018962 kernel: raid6: .... 
xor() 31956 MB/s, rmw enabled May 12 23:37:34.019014 kernel: raid6: using avx2x2 recovery algorithm May 12 23:37:34.032755 kernel: xor: automatically using best checksumming function avx May 12 23:37:34.121775 kernel: Btrfs loaded, zoned=no, fsverity=no May 12 23:37:34.127174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 12 23:37:34.131851 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:37:34.140060 systemd-udevd[434]: Using default interface naming scheme 'v255'. May 12 23:37:34.143010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:37:34.149834 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 12 23:37:34.156387 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation May 12 23:37:34.172279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 12 23:37:34.176910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 12 23:37:34.249465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 12 23:37:34.255878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 12 23:37:34.265809 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 12 23:37:34.266276 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 12 23:37:34.267074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 23:37:34.267415 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 12 23:37:34.270833 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 12 23:37:34.279057 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 12 23:37:34.323752 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 12 23:37:34.328247 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 12 23:37:34.328410 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 12 23:37:34.333780 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 12 23:37:34.336120 kernel: vmw_pvscsi: using 64bit dma May 12 23:37:34.336140 kernel: vmw_pvscsi: max_id: 16 May 12 23:37:34.336148 kernel: vmw_pvscsi: setting ring_pages to 8 May 12 23:37:34.340855 kernel: vmw_pvscsi: enabling reqCallThreshold May 12 23:37:34.340875 kernel: vmw_pvscsi: driver-based request coalescing enabled May 12 23:37:34.340884 kernel: vmw_pvscsi: using MSI-X May 12 23:37:34.350772 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 12 23:37:34.350817 kernel: cryptd: max_cpu_qlen set to 1000 May 12 23:37:34.350827 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 12 23:37:34.354636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 12 23:37:34.354956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:37:34.355262 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 23:37:34.355614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 23:37:34.355773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:37:34.356060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 12 23:37:34.358671 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 12 23:37:34.358948 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 12 23:37:34.361233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:37:34.367386 kernel: libata version 3.00 loaded. May 12 23:37:34.369103 kernel: ata_piix 0000:00:07.1: version 2.13 May 12 23:37:34.370876 kernel: AVX2 version of gcm_enc/dec engaged. May 12 23:37:34.370899 kernel: AES CTR mode by8 optimization enabled May 12 23:37:34.372753 kernel: scsi host1: ata_piix May 12 23:37:34.372851 kernel: scsi host2: ata_piix May 12 23:37:34.379766 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 12 23:37:34.379793 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 12 23:37:34.380443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:37:34.392883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 12 23:37:34.404136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:37:34.550778 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 12 23:37:34.557894 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 12 23:37:34.570942 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 12 23:37:34.571047 kernel: sd 0:0:0:0: [sda] Write Protect is off May 12 23:37:34.571113 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 12 23:37:34.571175 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 12 23:37:34.572279 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 12 23:37:34.575751 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 12 23:37:34.575836 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 12 23:37:34.588751 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 12 23:37:34.624119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:37:34.624159 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 12 23:37:34.765825 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (484) May 12 23:37:34.772800 kernel: BTRFS: device fsid 3e20f71b-219b-4481-b973-b1b0271e18c1 devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (487) May 12 23:37:34.778206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 12 23:37:34.784331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 12 23:37:34.790239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 12 23:37:34.794605 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 12 23:37:34.794736 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 12 23:37:34.802814 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 12 23:37:34.828769 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:37:35.837472 disk-uuid[598]: The operation has completed successfully. May 12 23:37:35.837755 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 12 23:37:35.875390 systemd[1]: disk-uuid.service: Deactivated successfully. May 12 23:37:35.875456 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
May 12 23:37:35.895876 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 12 23:37:35.897807 sh[612]: Success May 12 23:37:35.906870 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 12 23:37:35.955381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 12 23:37:35.955722 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 12 23:37:35.956513 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 12 23:37:35.974303 kernel: BTRFS info (device dm-0): first mount of filesystem 3e20f71b-219b-4481-b973-b1b0271e18c1 May 12 23:37:35.974333 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 12 23:37:35.974342 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 12 23:37:35.975391 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 12 23:37:35.976849 kernel: BTRFS info (device dm-0): using free space tree May 12 23:37:35.983754 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 12 23:37:35.984922 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 12 23:37:35.989880 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 12 23:37:35.991846 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 12 23:37:36.011364 kernel: BTRFS info (device sda6): first mount of filesystem 37df245c-8784-48a4-9eff-5b32614fef7c May 12 23:37:36.011406 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 12 23:37:36.011415 kernel: BTRFS info (device sda6): using free space tree May 12 23:37:36.017759 kernel: BTRFS info (device sda6): enabling ssd optimizations May 12 23:37:36.025102 kernel: BTRFS info (device sda6): last unmount of filesystem 37df245c-8784-48a4-9eff-5b32614fef7c May 12 23:37:36.026389 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 12 23:37:36.033865 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 12 23:37:36.055920 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 12 23:37:36.059840 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 12 23:37:36.105286 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 12 23:37:36.109963 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 12 23:37:36.126412 systemd-networkd[799]: lo: Link UP May 12 23:37:36.126419 systemd-networkd[799]: lo: Gained carrier May 12 23:37:36.132543 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 12 23:37:36.132681 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 12 23:37:36.127228 systemd-networkd[799]: Enumeration completed May 12 23:37:36.127487 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 23:37:36.127635 systemd[1]: Reached target network.target - Network. May 12 23:37:36.127731 systemd-networkd[799]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
May 12 23:37:36.132046 systemd-networkd[799]: ens192: Link UP May 12 23:37:36.132049 systemd-networkd[799]: ens192: Gained carrier May 12 23:37:36.181179 ignition[670]: Ignition 2.20.0 May 12 23:37:36.181190 ignition[670]: Stage: fetch-offline May 12 23:37:36.181230 ignition[670]: no configs at "/usr/lib/ignition/base.d" May 12 23:37:36.181243 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:37:36.181315 ignition[670]: parsed url from cmdline: "" May 12 23:37:36.181317 ignition[670]: no config URL provided May 12 23:37:36.181320 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" May 12 23:37:36.181325 ignition[670]: no config at "/usr/lib/ignition/user.ign" May 12 23:37:36.181707 ignition[670]: config successfully fetched May 12 23:37:36.181723 ignition[670]: parsing config with SHA512: b29731fb02163ffcacbb58dddfd7dca72b154f1d0aecd630efcfd1c66472b088995eea87c9abcb5524b3ea42aa9a932068264c97d244e943bd4faa7406a7110c May 12 23:37:36.184394 unknown[670]: fetched base config from "system" May 12 23:37:36.184402 unknown[670]: fetched user config from "vmware" May 12 23:37:36.184659 ignition[670]: fetch-offline: fetch-offline passed May 12 23:37:36.184704 ignition[670]: Ignition finished successfully May 12 23:37:36.185598 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 12 23:37:36.185815 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 12 23:37:36.189861 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 12 23:37:36.199399 ignition[808]: Ignition 2.20.0 May 12 23:37:36.199410 ignition[808]: Stage: kargs May 12 23:37:36.200264 ignition[808]: no configs at "/usr/lib/ignition/base.d" May 12 23:37:36.200277 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:37:36.200884 ignition[808]: kargs: kargs passed May 12 23:37:36.200917 ignition[808]: Ignition finished successfully May 12 23:37:36.202309 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 12 23:37:36.205889 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 12 23:37:36.212590 ignition[814]: Ignition 2.20.0 May 12 23:37:36.212602 ignition[814]: Stage: disks May 12 23:37:36.212701 ignition[814]: no configs at "/usr/lib/ignition/base.d" May 12 23:37:36.212707 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:37:36.213216 ignition[814]: disks: disks passed May 12 23:37:36.213241 ignition[814]: Ignition finished successfully May 12 23:37:36.214016 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 12 23:37:36.214503 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 12 23:37:36.214801 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 12 23:37:36.215037 systemd[1]: Reached target local-fs.target - Local File Systems. May 12 23:37:36.215262 systemd[1]: Reached target sysinit.target - System Initialization. May 12 23:37:36.215484 systemd[1]: Reached target basic.target - Basic System. May 12 23:37:36.222884 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 12 23:37:36.315544 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 12 23:37:36.321766 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
May 12 23:37:36.325825 systemd[1]: Mounting sysroot.mount - /sysroot... May 12 23:37:36.513748 kernel: EXT4-fs (sda9): mounted filesystem 87f139f9-1bcd-498d-9753-0a353ad6da1c r/w with ordered data mode. Quota mode: none. May 12 23:37:36.514045 systemd[1]: Mounted sysroot.mount - /sysroot. May 12 23:37:36.514442 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 12 23:37:36.534810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 12 23:37:36.544511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 12 23:37:36.544882 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 12 23:37:36.544917 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 12 23:37:36.544936 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 12 23:37:36.549097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 12 23:37:36.550134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 12 23:37:36.626767 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (830) May 12 23:37:36.639534 kernel: BTRFS info (device sda6): first mount of filesystem 37df245c-8784-48a4-9eff-5b32614fef7c May 12 23:37:36.639563 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 12 23:37:36.639575 kernel: BTRFS info (device sda6): using free space tree May 12 23:37:36.686817 kernel: BTRFS info (device sda6): enabling ssd optimizations May 12 23:37:36.692962 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 12 23:37:36.816571 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory May 12 23:37:36.824721 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory May 12 23:37:36.829729 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory May 12 23:37:36.839170 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory May 12 23:37:36.981124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 12 23:37:36.988849 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 12 23:37:36.991584 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 12 23:37:36.994684 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 12 23:37:36.996750 kernel: BTRFS info (device sda6): last unmount of filesystem 37df245c-8784-48a4-9eff-5b32614fef7c May 12 23:37:37.012639 ignition[942]: INFO : Ignition 2.20.0 May 12 23:37:37.012639 ignition[942]: INFO : Stage: mount May 12 23:37:37.013896 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 12 23:37:37.013896 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:37:37.013896 ignition[942]: INFO : mount: mount passed May 12 23:37:37.013896 ignition[942]: INFO : Ignition finished successfully May 12 23:37:37.014491 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 12 23:37:37.021067 systemd[1]: Starting ignition-files.service - Ignition (files)... May 12 23:37:37.022677 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 12 23:37:37.026153 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 12 23:37:37.036758 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (954) May 12 23:37:37.039263 kernel: BTRFS info (device sda6): first mount of filesystem 37df245c-8784-48a4-9eff-5b32614fef7c May 12 23:37:37.039291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 12 23:37:37.039306 kernel: BTRFS info (device sda6): using free space tree May 12 23:37:37.044356 kernel: BTRFS info (device sda6): enabling ssd optimizations May 12 23:37:37.044231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 12 23:37:37.056338 ignition[970]: INFO : Ignition 2.20.0 May 12 23:37:37.057011 ignition[970]: INFO : Stage: files May 12 23:37:37.057184 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" May 12 23:37:37.057184 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:37:37.057892 ignition[970]: DEBUG : files: compiled without relabeling support, skipping May 12 23:37:37.058420 ignition[970]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 12 23:37:37.058420 ignition[970]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 12 23:37:37.060624 ignition[970]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 12 23:37:37.060901 ignition[970]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 12 23:37:37.061134 ignition[970]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 12 23:37:37.061066 unknown[970]: wrote ssh authorized keys file for user: core May 12 23:37:37.063141 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 12 23:37:37.063394 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 12 23:37:37.100161 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 12 23:37:37.374784 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 12 23:37:37.374784 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 12 23:37:37.375214 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 12 23:37:37.862384 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 12 23:37:37.944819 systemd-networkd[799]: ens192: Gained IPv6LL May 12 23:37:37.954711 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 12 23:37:37.954711 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 12 
23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 12 23:37:37.955194 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 12 23:37:37.956540 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 12 23:37:37.956540 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 23:37:37.956540 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 23:37:37.956540 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 23:37:37.956540 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 12 23:37:38.376185 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 12 23:37:38.864676 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 12 23:37:38.864676 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 12 23:37:38.865204 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 12 23:37:38.865204 ignition[970]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 12 23:37:38.865556 ignition[970]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 12 23:37:38.865556 ignition[970]: INFO : 
files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 12 23:37:38.983312 ignition[970]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 12 23:37:38.985766 ignition[970]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 12 23:37:38.985766 ignition[970]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 12 23:37:38.985766 ignition[970]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" May 12 23:37:38.985766 ignition[970]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 12 23:37:38.985766 ignition[970]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 12 23:37:38.987021 ignition[970]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 12 23:37:38.987021 ignition[970]: INFO : files: files passed May 12 23:37:38.987021 ignition[970]: INFO : Ignition finished successfully May 12 23:37:38.986653 systemd[1]: Finished ignition-files.service - Ignition (files). May 12 23:37:39.002865 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 12 23:37:39.004852 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 12 23:37:39.010955 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 12 23:37:39.010955 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 12 23:37:39.012059 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 12 23:37:39.012658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 12 23:37:39.013012 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 12 23:37:39.015903 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 12 23:37:39.016312 systemd[1]: ignition-quench.service: Deactivated successfully. May 12 23:37:39.016495 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 12 23:37:39.027547 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 12 23:37:39.027612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 12 23:37:39.027989 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 12 23:37:39.028093 systemd[1]: Reached target initrd.target - Initrd Default Target. May 12 23:37:39.028284 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 12 23:37:39.028807 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 12 23:37:39.037805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 12 23:37:39.042876 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 12 23:37:39.048767 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 12 23:37:39.049093 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 23:37:39.049266 systemd[1]: Stopped target timers.target - Timer Units. 
May 12 23:37:39.049416 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 12 23:37:39.049505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 12 23:37:39.049768 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 12 23:37:39.049913 systemd[1]: Stopped target basic.target - Basic System. May 12 23:37:39.050141 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 12 23:37:39.050286 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 12 23:37:39.050437 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 12 23:37:39.050588 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 12 23:37:39.050729 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 12 23:37:39.052414 systemd[1]: Stopped target sysinit.target - System Initialization. May 12 23:37:39.052567 systemd[1]: Stopped target local-fs.target - Local File Systems. May 12 23:37:39.052711 systemd[1]: Stopped target swap.target - Swaps. May 12 23:37:39.052891 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 12 23:37:39.052961 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 12 23:37:39.053212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 12 23:37:39.053422 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 23:37:39.053610 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 12 23:37:39.053678 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 23:37:39.053855 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 12 23:37:39.053927 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 12 23:37:39.054250 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 12 23:37:39.054315 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 12 23:37:39.054536 systemd[1]: Stopped target paths.target - Path Units. May 12 23:37:39.054689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 12 23:37:39.057762 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 12 23:37:39.057945 systemd[1]: Stopped target slices.target - Slice Units. May 12 23:37:39.058142 systemd[1]: Stopped target sockets.target - Socket Units. May 12 23:37:39.058337 systemd[1]: iscsid.socket: Deactivated successfully. May 12 23:37:39.058405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 12 23:37:39.058589 systemd[1]: iscsiuio.socket: Deactivated successfully. May 12 23:37:39.058634 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 12 23:37:39.058883 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 12 23:37:39.058948 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 12 23:37:39.059171 systemd[1]: ignition-files.service: Deactivated successfully. May 12 23:37:39.059228 systemd[1]: Stopped ignition-files.service - Ignition (files). May 12 23:37:39.063904 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 12 23:37:39.065884 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 12 23:37:39.066012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 12 23:37:39.066107 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 12 23:37:39.066398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 12 23:37:39.066513 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 12 23:37:39.069794 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 12 23:37:39.072602 ignition[1025]: INFO : Ignition 2.20.0 May 12 23:37:39.075101 ignition[1025]: INFO : Stage: umount May 12 23:37:39.075101 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" May 12 23:37:39.075101 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 12 23:37:39.075101 ignition[1025]: INFO : umount: umount passed May 12 23:37:39.075101 ignition[1025]: INFO : Ignition finished successfully May 12 23:37:39.074999 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 12 23:37:39.075441 systemd[1]: ignition-mount.service: Deactivated successfully. May 12 23:37:39.075510 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 12 23:37:39.078832 systemd[1]: Stopped target network.target - Network. May 12 23:37:39.078960 systemd[1]: ignition-disks.service: Deactivated successfully. May 12 23:37:39.079003 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 12 23:37:39.079379 systemd[1]: ignition-kargs.service: Deactivated successfully. May 12 23:37:39.079413 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 12 23:37:39.079552 systemd[1]: ignition-setup.service: Deactivated successfully. May 12 23:37:39.079575 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 12 23:37:39.079810 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 12 23:37:39.079843 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 12 23:37:39.080119 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 12 23:37:39.080424 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 12 23:37:39.084948 systemd[1]: systemd-resolved.service: Deactivated successfully. May 12 23:37:39.085045 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 12 23:37:39.086981 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 12 23:37:39.087197 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 12 23:37:39.087229 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:37:39.088235 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 12 23:37:39.090730 systemd[1]: systemd-networkd.service: Deactivated successfully. May 12 23:37:39.090904 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 12 23:37:39.091863 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 12 23:37:39.091978 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 12 23:37:39.091997 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 12 23:37:39.095820 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 12 23:37:39.095929 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 12 23:37:39.095961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 12 23:37:39.096114 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 12 23:37:39.096145 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 12 23:37:39.096271 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 12 23:37:39.096294 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 12 23:37:39.096478 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 12 23:37:39.096501 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 12 23:37:39.096638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:37:39.097547 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 12 23:37:39.108752 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 12 23:37:39.109180 systemd[1]: systemd-udevd.service: Deactivated successfully. May 12 23:37:39.109269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:37:39.109619 systemd[1]: network-cleanup.service: Deactivated successfully. May 12 23:37:39.109677 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 12 23:37:39.110886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 12 23:37:39.110919 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 12 23:37:39.111152 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 12 23:37:39.111171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 12 23:37:39.111470 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 12 23:37:39.111504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 12 23:37:39.111794 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 12 23:37:39.111817 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 12 23:37:39.112113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 12 23:37:39.112137 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:37:39.117856 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 12 23:37:39.118020 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 12 23:37:39.118054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:37:39.118234 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 12 23:37:39.118257 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 23:37:39.118389 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 12 23:37:39.118418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:37:39.118538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 23:37:39.118559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:37:39.121438 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 12 23:37:39.121676 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 12 23:37:39.468142 systemd[1]: sysroot-boot.service: Deactivated successfully. May 12 23:37:39.468224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
May 12 23:37:39.468787 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 12 23:37:39.468949 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 12 23:37:39.468992 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 12 23:37:39.472850 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 12 23:37:39.488019 systemd[1]: Switching root. May 12 23:37:39.518767 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). May 12 23:37:39.518812 systemd-journald[216]: Journal stopped May 12 23:37:41.618615 kernel: SELinux: policy capability network_peer_controls=1 May 12 23:37:41.618638 kernel: SELinux: policy capability open_perms=1 May 12 23:37:41.618646 kernel: SELinux: policy capability extended_socket_class=1 May 12 23:37:41.618651 kernel: SELinux: policy capability always_check_network=0 May 12 23:37:41.618656 kernel: SELinux: policy capability cgroup_seclabel=1 May 12 23:37:41.618662 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 12 23:37:41.618670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 12 23:37:41.618675 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 12 23:37:41.618681 kernel: audit: type=1403 audit(1747093060.696:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 12 23:37:41.618688 systemd[1]: Successfully loaded SELinux policy in 32.810ms. May 12 23:37:41.618695 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.275ms. May 12 23:37:41.618701 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 12 23:37:41.618708 systemd[1]: Detected virtualization vmware. May 12 23:37:41.618715 systemd[1]: Detected architecture x86-64. May 12 23:37:41.618722 systemd[1]: Detected first boot. May 12 23:37:41.618728 systemd[1]: Initializing machine ID from random generator. May 12 23:37:41.618735 zram_generator::config[1071]: No configuration found. May 12 23:37:41.618860 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 12 23:37:41.618871 kernel: Guest personality initialized and is active May 12 23:37:41.618878 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 12 23:37:41.618886 kernel: Initialized host personality May 12 23:37:41.618892 kernel: NET: Registered PF_VSOCK protocol family May 12 23:37:41.618899 systemd[1]: Populated /etc with preset unit settings. May 12 23:37:41.618907 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:37:41.618916 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 12 23:37:41.618923 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 12 23:37:41.618930 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 12 23:37:41.618936 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 12 23:37:41.618943 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 12 23:37:41.618949 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
May 12 23:37:41.618957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 12 23:37:41.618964 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 12 23:37:41.618971 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 12 23:37:41.618977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 12 23:37:41.618984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 12 23:37:41.618991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 12 23:37:41.618998 systemd[1]: Created slice user.slice - User and Session Slice. May 12 23:37:41.619005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 23:37:41.619012 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 12 23:37:41.619020 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 12 23:37:41.619028 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 12 23:37:41.619035 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 12 23:37:41.619042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 12 23:37:41.619049 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 12 23:37:41.619056 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 23:37:41.619063 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 12 23:37:41.619071 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 12 23:37:41.619078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 12 23:37:41.619084 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 12 23:37:41.619091 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 23:37:41.619098 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 12 23:37:41.619104 systemd[1]: Reached target slices.target - Slice Units. May 12 23:37:41.619111 systemd[1]: Reached target swap.target - Swaps. May 12 23:37:41.619120 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 12 23:37:41.619132 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 12 23:37:41.619141 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 12 23:37:41.619148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 12 23:37:41.619157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 12 23:37:41.619164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 12 23:37:41.619173 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 12 23:37:41.619186 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 12 23:37:41.619198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 12 23:37:41.619211 systemd[1]: Mounting media.mount - External Media Directory... May 12 23:37:41.619224 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 12 23:37:41.619236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 12 23:37:41.619244 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 12 23:37:41.619251 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 12 23:37:41.619258 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 12 23:37:41.619270 systemd[1]: Reached target machines.target - Containers. May 12 23:37:41.619280 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 12 23:37:41.619288 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 12 23:37:41.619294 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 12 23:37:41.619301 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 12 23:37:41.619308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:37:41.619315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 23:37:41.619322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:37:41.619330 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 12 23:37:41.619337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 23:37:41.619345 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 12 23:37:41.619352 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 12 23:37:41.619359 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 12 23:37:41.619365 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 12 23:37:41.619372 systemd[1]: Stopped systemd-fsck-usr.service. May 12 23:37:41.619379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 23:37:41.619387 systemd[1]: Starting systemd-journald.service - Journal Service... May 12 23:37:41.619395 kernel: fuse: init (API version 7.39) May 12 23:37:41.619406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 12 23:37:41.619419 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 12 23:37:41.619428 kernel: loop: module loaded May 12 23:37:41.619448 systemd-journald[1171]: Collecting audit messages is disabled. May 12 23:37:41.619468 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 12 23:37:41.619476 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 12 23:37:41.619485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 12 23:37:41.619497 systemd[1]: verity-setup.service: Deactivated successfully. May 12 23:37:41.619510 systemd[1]: Stopped verity-setup.service. May 12 23:37:41.619522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 12 23:37:41.619536 systemd-journald[1171]: Journal started May 12 23:37:41.619552 systemd-journald[1171]: Runtime Journal (/run/log/journal/1f62f9f59eaf44088a0f204723a5b3b7) is 4.8M, max 38.6M, 33.8M free. May 12 23:37:41.451298 systemd[1]: Queued start job for default target multi-user.target. May 12 23:37:41.459791 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 12 23:37:41.460016 systemd[1]: systemd-journald.service: Deactivated successfully. May 12 23:37:41.622139 jq[1141]: true May 12 23:37:41.630757 systemd[1]: Started systemd-journald.service - Journal Service. May 12 23:37:41.630854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 12 23:37:41.631013 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 12 23:37:41.631165 systemd[1]: Mounted media.mount - External Media Directory. May 12 23:37:41.631307 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 12 23:37:41.631453 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 12 23:37:41.631606 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 12 23:37:41.631862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 12 23:37:41.633789 kernel: ACPI: bus type drm_connector registered May 12 23:37:41.636503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:37:41.636780 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 12 23:37:41.636878 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 12 23:37:41.637109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:37:41.637199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:37:41.637423 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 23:37:41.637510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 23:37:41.637734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:37:41.638152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:37:41.638469 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 12 23:37:41.638604 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 12 23:37:41.638878 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:37:41.639007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:37:41.639299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 12 23:37:41.639612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 12 23:37:41.639933 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 12 23:37:41.640255 jq[1186]: true May 12 23:37:41.647630 systemd[1]: Reached target network-pre.target - Preparation for Network. May 12 23:37:41.650805 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 12 23:37:41.653945 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 12 23:37:41.654102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 12 23:37:41.654123 systemd[1]: Reached target local-fs.target - Local File Systems. 
May 12 23:37:41.655646 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 12 23:37:41.656696 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 12 23:37:41.658151 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 12 23:37:41.658300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:37:41.666847 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 12 23:37:41.668827 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 12 23:37:41.669305 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 23:37:41.671146 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 12 23:37:41.671280 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 23:37:41.674528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 23:37:41.676820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 12 23:37:41.679568 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 12 23:37:41.681411 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 12 23:37:41.681627 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 12 23:37:41.682392 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 12 23:37:41.682935 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 12 23:37:41.689489 systemd-journald[1171]: Time spent on flushing to /var/log/journal/1f62f9f59eaf44088a0f204723a5b3b7 is 55.423ms for 1851 entries. May 12 23:37:41.689489 systemd-journald[1171]: System Journal (/var/log/journal/1f62f9f59eaf44088a0f204723a5b3b7) is 8M, max 584.8M, 576.8M free. May 12 23:37:41.762707 systemd-journald[1171]: Received client request to flush runtime journal. May 12 23:37:41.762735 kernel: loop0: detected capacity change from 0 to 147912 May 12 23:37:41.698080 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 12 23:37:41.698512 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 12 23:37:41.705551 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 12 23:37:41.754296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 23:37:41.764115 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 12 23:37:41.804963 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. May 12 23:37:41.804974 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. May 12 23:37:41.805841 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 12 23:37:41.808859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 23:37:41.815668 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 12 23:37:41.821073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 12 23:37:41.829950 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 12 23:37:41.842383 udevadm[1240]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 12 23:37:41.848111 ignition[1188]: Ignition 2.20.0 May 12 23:37:41.848340 ignition[1188]: deleting config from guestinfo properties May 12 23:37:41.854407 ignition[1188]: Successfully deleted config May 12 23:37:41.861071 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 12 23:37:41.875770 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 12 23:37:41.891922 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 12 23:37:41.898868 kernel: loop1: detected capacity change from 0 to 2960 May 12 23:37:41.900922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 12 23:37:41.914172 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 12 23:37:41.914186 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 12 23:37:41.918676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:37:41.991763 kernel: loop2: detected capacity change from 0 to 138176 May 12 23:37:42.072771 kernel: loop3: detected capacity change from 0 to 210664 May 12 23:37:42.306757 kernel: loop4: detected capacity change from 0 to 147912 May 12 23:37:42.460982 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 12 23:37:42.541767 kernel: loop5: detected capacity change from 0 to 2960 May 12 23:37:42.701393 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 12 23:37:42.709066 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:37:42.725558 systemd-udevd[1255]: Using default interface naming scheme 'v255'. May 12 23:37:42.761764 kernel: loop6: detected capacity change from 0 to 138176 May 12 23:37:43.001759 kernel: loop7: detected capacity change from 0 to 210664 May 12 23:37:43.087178 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:37:43.093895 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 12 23:37:43.106737 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 12 23:37:43.114112 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 12 23:37:43.151088 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 12 23:37:43.178760 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 12 23:37:43.182755 kernel: ACPI: button: Power Button [PWRF] May 12 23:37:43.230007 systemd-networkd[1257]: lo: Link UP May 12 23:37:43.230012 systemd-networkd[1257]: lo: Gained carrier May 12 23:37:43.231413 systemd-networkd[1257]: Enumeration completed May 12 23:37:43.231479 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 23:37:43.231924 systemd-networkd[1257]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
May 12 23:37:43.237176 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 12 23:37:43.237334 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 12 23:37:43.236193 systemd-networkd[1257]: ens192: Link UP May 12 23:37:43.236279 systemd-networkd[1257]: ens192: Gained carrier May 12 23:37:43.240259 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 12 23:37:43.238929 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 12 23:37:43.245189 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 12 23:37:43.257429 (sd-merge)[1253]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 12 23:37:43.260263 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 12 23:37:43.260273 (sd-merge)[1253]: Merged extensions into '/usr'. May 12 23:37:43.269447 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)... May 12 23:37:43.269457 systemd[1]: Reloading... May 12 23:37:43.294758 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1259) May 12 23:37:43.315838 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 May 12 23:37:43.339753 zram_generator::config[1322]: No configuration found. May 12 23:37:43.352899 (udev-worker)[1265]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 12 23:37:43.373986 kernel: mousedev: PS/2 mouse device common for all mice May 12 23:37:43.442438 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:37:43.461330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:37:43.526042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 12 23:37:43.526512 systemd[1]: Reloading finished in 256 ms. May 12 23:37:43.544497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 12 23:37:43.557359 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 12 23:37:43.564852 systemd[1]: Starting ensure-sysext.service... May 12 23:37:43.567881 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 12 23:37:43.574848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 12 23:37:43.576009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 23:37:43.578832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:37:43.587698 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... May 12 23:37:43.587707 systemd[1]: Reloading... May 12 23:37:43.608349 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 12 23:37:43.608514 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
May 12 23:37:43.610071 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 12 23:37:43.610312 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. May 12 23:37:43.610353 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. May 12 23:37:43.612511 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. May 12 23:37:43.612628 systemd-tmpfiles[1384]: Skipping /boot May 12 23:37:43.615452 lvm[1382]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 12 23:37:43.619517 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. May 12 23:37:43.619863 systemd-tmpfiles[1384]: Skipping /boot May 12 23:37:43.649643 ldconfig[1211]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 12 23:37:43.654789 zram_generator::config[1423]: No configuration found. May 12 23:37:43.715047 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:37:43.733220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:37:43.802938 systemd[1]: Reloading finished in 215 ms. May 12 23:37:43.813977 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 12 23:37:43.822244 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 12 23:37:43.822526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 12 23:37:43.822808 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:37:43.823077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:37:43.828066 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 12 23:37:43.836025 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 23:37:43.842850 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 12 23:37:43.845605 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 12 23:37:43.846753 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 12 23:37:43.851923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 12 23:37:43.853199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 12 23:37:43.855404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:37:43.856617 lvm[1489]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 12 23:37:43.862019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:37:43.864940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:37:43.867893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 23:37:43.868068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 12 23:37:43.868147 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 23:37:43.868223 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:37:43.871146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:37:43.871242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:37:43.871294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 23:37:43.871365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:37:43.876997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:37:43.877134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:37:43.879132 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:37:43.885027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 23:37:43.885791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:37:43.885922 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 23:37:43.886087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 12 23:37:43.891598 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 12 23:37:43.892922 systemd[1]: Finished ensure-sysext.service. May 12 23:37:43.893340 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 12 23:37:43.895795 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:37:43.895917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:37:43.897970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:37:43.898922 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:37:43.899693 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 23:37:43.900157 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 23:37:43.905417 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 23:37:43.905465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 23:37:43.915853 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 12 23:37:43.945103 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 12 23:37:43.945300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 12 23:37:43.952496 augenrules[1525]: No rules May 12 23:37:43.953395 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 12 23:37:43.959267 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 12 23:37:43.959462 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 12 23:37:43.959749 systemd[1]: audit-rules.service: Deactivated successfully. May 12 23:37:43.959952 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 23:37:43.960716 systemd[1]: Reached target time-set.target - System Time Set. May 12 23:37:43.964176 systemd-resolved[1491]: Positive Trust Anchors: May 12 23:37:43.966636 systemd-resolved[1491]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 23:37:43.966661 systemd-resolved[1491]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 23:37:43.967772 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 12 23:37:43.969219 systemd-resolved[1491]: Defaulting to hostname 'linux'. May 12 23:39:05.317037 systemd-timesyncd[1519]: Contacted time server 45.63.54.13:123 (0.flatcar.pool.ntp.org). May 12 23:39:05.317071 systemd-timesyncd[1519]: Initial clock synchronization to Mon 2025-05-12 23:39:05.316991 UTC. May 12 23:39:05.317553 systemd-resolved[1491]: Clock change detected. Flushing caches. May 12 23:39:05.317567 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 23:39:05.317726 systemd[1]: Reached target network.target - Network. May 12 23:39:05.317824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 23:39:05.317936 systemd[1]: Reached target sysinit.target - System Initialization. May 12 23:39:05.318091 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 12 23:39:05.318216 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 12 23:39:05.318410 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 12 23:39:05.318570 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 12 23:39:05.318685 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 12 23:39:05.318812 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 12 23:39:05.318836 systemd[1]: Reached target paths.target - Path Units. May 12 23:39:05.318925 systemd[1]: Reached target timers.target - Timer Units. 
May 12 23:39:05.319915 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 12 23:39:05.321104 systemd[1]: Starting docker.socket - Docker Socket for the API... May 12 23:39:05.322962 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 12 23:39:05.323190 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 12 23:39:05.323312 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 12 23:39:05.325029 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 12 23:39:05.325364 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 12 23:39:05.325895 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 12 23:39:05.326038 systemd[1]: Reached target sockets.target - Socket Units. May 12 23:39:05.326128 systemd[1]: Reached target basic.target - Basic System. May 12 23:39:05.326242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 12 23:39:05.326261 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 12 23:39:05.327110 systemd[1]: Starting containerd.service - containerd container runtime... May 12 23:39:05.329882 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 12 23:39:05.331727 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 12 23:39:05.333968 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 12 23:39:05.334096 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 12 23:39:05.336312 jq[1539]: false May 12 23:39:05.336862 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 12 23:39:05.339583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 12 23:39:05.341869 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 12 23:39:05.344576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 12 23:39:05.352359 dbus-daemon[1538]: [system] SELinux support is enabled May 12 23:39:05.352840 systemd[1]: Starting systemd-logind.service - User Login Management... May 12 23:39:05.353510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 12 23:39:05.354435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 12 23:39:05.355452 systemd[1]: Starting update-engine.service - Update Engine... May 12 23:39:05.358379 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 12 23:39:05.360817 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 12 23:39:05.361366 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 12 23:39:05.368145 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 12 23:39:05.368273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 12 23:39:05.372493 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 12 23:39:05.372520 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 12 23:39:05.373789 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 12 23:39:05.373800 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 12 23:39:05.378983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 12 23:39:05.379122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 12 23:39:05.382167 jq[1548]: true May 12 23:39:05.382843 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 12 23:39:05.385957 systemd[1]: motdgen.service: Deactivated successfully. May 12 23:39:05.386094 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 12 23:39:05.389424 update_engine[1547]: I20250512 23:39:05.389380 1547 main.cc:92] Flatcar Update Engine starting May 12 23:39:05.390637 update_engine[1547]: I20250512 23:39:05.390619 1547 update_check_scheduler.cc:74] Next update check in 3m25s May 12 23:39:05.392010 systemd[1]: Started update-engine.service - Update Engine. May 12 23:39:05.395871 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 12 23:39:05.398847 jq[1565]: true May 12 23:39:05.397941 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 12 23:39:05.406335 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 12 23:39:05.413543 extend-filesystems[1540]: Found loop4 May 12 23:39:05.413797 extend-filesystems[1540]: Found loop5 May 12 23:39:05.413797 extend-filesystems[1540]: Found loop6 May 12 23:39:05.413797 extend-filesystems[1540]: Found loop7 May 12 23:39:05.413797 extend-filesystems[1540]: Found sda May 12 23:39:05.413797 extend-filesystems[1540]: Found sda1 May 12 23:39:05.413797 extend-filesystems[1540]: Found sda2 May 12 23:39:05.413797 extend-filesystems[1540]: Found sda3 May 12 23:39:05.413797 extend-filesystems[1540]: Found usr May 12 23:39:05.413797 extend-filesystems[1540]: Found sda4 May 12 23:39:05.413797 extend-filesystems[1540]: Found sda6 May 12 23:39:05.414875 extend-filesystems[1540]: Found sda7 May 12 23:39:05.414967 extend-filesystems[1540]: Found sda9 May 12 23:39:05.414967 extend-filesystems[1540]: Checking size of /dev/sda9 May 12 23:39:05.428820 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 12 23:39:05.429717 unknown[1564]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 12 23:39:05.431190 unknown[1564]: Core dump limit set to -1 May 12 23:39:05.445829 systemd-logind[1545]: Watching system buttons on /dev/input/event1 (Power Button) May 12 23:39:05.447345 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 12 23:39:05.447540 systemd-logind[1545]: New seat seat0. May 12 23:39:05.453213 systemd[1]: Started systemd-logind.service - User Login Management. 
May 12 23:39:05.454896 tar[1553]: linux-amd64/helm May 12 23:39:05.462193 extend-filesystems[1540]: Old size kept for /dev/sda9 May 12 23:39:05.462193 extend-filesystems[1540]: Found sr0 May 12 23:39:05.463120 systemd[1]: extend-filesystems.service: Deactivated successfully. May 12 23:39:05.463561 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 12 23:39:05.496764 bash[1594]: Updated "/home/core/.ssh/authorized_keys" May 12 23:39:05.497840 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 12 23:39:05.498447 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 12 23:39:05.507748 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1260) May 12 23:39:05.592165 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 12 23:39:05.603222 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 12 23:39:05.623827 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 12 23:39:05.633525 systemd[1]: Starting issuegen.service - Generate /run/issue... May 12 23:39:05.642167 systemd[1]: issuegen.service: Deactivated successfully. May 12 23:39:05.642319 systemd[1]: Finished issuegen.service - Generate /run/issue. May 12 23:39:05.654945 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 12 23:39:05.674642 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 12 23:39:05.681031 systemd[1]: Started getty@tty1.service - Getty on tty1. May 12 23:39:05.683002 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 12 23:39:05.683228 systemd[1]: Reached target getty.target - Login Prompts. May 12 23:39:05.802498 containerd[1566]: time="2025-05-12T23:39:05.802404364Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 12 23:39:05.826665 containerd[1566]: time="2025-05-12T23:39:05.826627590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827747113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827768662Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827779615Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827880442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827890456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827930634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:39:05.827971 containerd[1566]: time="2025-05-12T23:39:05.827938843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.828133 containerd[1566]: time="2025-05-12T23:39:05.828069113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:39:05.828133 containerd[1566]: time="2025-05-12T23:39:05.828078859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.828133 containerd[1566]: time="2025-05-12T23:39:05.828087284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:39:05.828133 containerd[1566]: time="2025-05-12T23:39:05.828092502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.828133 containerd[1566]: time="2025-05-12T23:39:05.828132917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.828530 containerd[1566]: time="2025-05-12T23:39:05.828250987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 12 23:39:05.828530 containerd[1566]: time="2025-05-12T23:39:05.828325829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:39:05.828530 containerd[1566]: time="2025-05-12T23:39:05.828333781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 12 23:39:05.828530 containerd[1566]: time="2025-05-12T23:39:05.828375589Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 12 23:39:05.828530 containerd[1566]: time="2025-05-12T23:39:05.828402396Z" level=info msg="metadata content store policy set" policy=shared May 12 23:39:05.832378 containerd[1566]: time="2025-05-12T23:39:05.832186527Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 12 23:39:05.832378 containerd[1566]: time="2025-05-12T23:39:05.832226544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 12 23:39:05.832378 containerd[1566]: time="2025-05-12T23:39:05.832236234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 12 23:39:05.832378 containerd[1566]: time="2025-05-12T23:39:05.832245756Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 12 23:39:05.832378 containerd[1566]: time="2025-05-12T23:39:05.832257988Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 12 23:39:05.832378 containerd[1566]: time="2025-05-12T23:39:05.832352887Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 12 23:39:05.832570 containerd[1566]: time="2025-05-12T23:39:05.832483639Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 12 23:39:05.832570 containerd[1566]: time="2025-05-12T23:39:05.832549552Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 12 23:39:05.832570 containerd[1566]: time="2025-05-12T23:39:05.832558570Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 12 23:39:05.832570 containerd[1566]: time="2025-05-12T23:39:05.832567015Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832575333Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832582743Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832590063Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832597887Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832605734Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832612880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832619307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832625511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 12 23:39:05.832643 containerd[1566]: time="2025-05-12T23:39:05.832637766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832645382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832654394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832662122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832668509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832679935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832686361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832694225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832701728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832710646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832717205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832724345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832746011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832755931Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832768000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833251 containerd[1566]: time="2025-05-12T23:39:05.832775619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832781642Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832806085Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832816229Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832822028Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832828309Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832833408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832840452Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832845807Z" level=info msg="NRI interface is disabled by configuration." May 12 23:39:05.833922 containerd[1566]: time="2025-05-12T23:39:05.832851745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 12 23:39:05.834049 systemd[1]: Started containerd.service - containerd container runtime. 
May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833017580Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833045725Z" level=info msg="Connect containerd service" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833068051Z" level=info msg="using legacy CRI server" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833072225Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833133356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833539664Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 23:39:05.834232 containerd[1566]: 
time="2025-05-12T23:39:05.833637209Z" level=info msg="Start subscribing containerd event" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833664323Z" level=info msg="Start recovering state" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833697667Z" level=info msg="Start event monitor" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833708455Z" level=info msg="Start snapshots syncer" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833713724Z" level=info msg="Start cni network conf syncer for default" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833719596Z" level=info msg="Start streaming server" May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833917539Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 12 23:39:05.834232 containerd[1566]: time="2025-05-12T23:39:05.833959227Z" level=info msg=serving... address=/run/containerd/containerd.sock May 12 23:39:05.837359 containerd[1566]: time="2025-05-12T23:39:05.836481952Z" level=info msg="containerd successfully booted in 0.035064s" May 12 23:39:05.903794 tar[1553]: linux-amd64/LICENSE May 12 23:39:05.903891 tar[1553]: linux-amd64/README.md May 12 23:39:05.917448 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 12 23:39:06.075903 systemd-networkd[1257]: ens192: Gained IPv6LL May 12 23:39:06.077322 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 12 23:39:06.078114 systemd[1]: Reached target network-online.target - Network is Online. May 12 23:39:06.082895 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 12 23:39:06.089932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:39:06.092884 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 12 23:39:06.116530 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 12 23:39:06.123927 systemd[1]: coreos-metadata.service: Deactivated successfully. May 12 23:39:06.124199 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 12 23:39:06.125025 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 12 23:39:07.614774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:07.615224 systemd[1]: Reached target multi-user.target - Multi-User System. May 12 23:39:07.615812 systemd[1]: Startup finished in 956ms (kernel) + 7.069s (initrd) + 5.603s (userspace) = 13.629s. May 12 23:39:07.633213 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:39:07.685279 login[1670]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 12 23:39:07.686391 login[1673]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 12 23:39:07.695474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 12 23:39:07.700905 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 12 23:39:07.703778 systemd-logind[1545]: New session 1 of user core. May 12 23:39:07.709175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 12 23:39:07.716079 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 12 23:39:07.720809 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 12 23:39:07.722471 systemd-logind[1545]: New session c1 of user core. May 12 23:39:07.863486 systemd[1724]: Queued start job for default target default.target. May 12 23:39:07.872703 systemd[1724]: Created slice app.slice - User Application Slice. May 12 23:39:07.872730 systemd[1724]: Reached target paths.target - Paths. May 12 23:39:07.872787 systemd[1724]: Reached target timers.target - Timers. May 12 23:39:07.876700 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... May 12 23:39:07.880976 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 12 23:39:07.881452 systemd[1724]: Reached target sockets.target - Sockets. May 12 23:39:07.881489 systemd[1724]: Reached target basic.target - Basic System. May 12 23:39:07.881517 systemd[1724]: Reached target default.target - Main User Target. May 12 23:39:07.881534 systemd[1724]: Startup finished in 154ms. May 12 23:39:07.881667 systemd[1]: Started user@500.service - User Manager for UID 500. May 12 23:39:07.883444 systemd[1]: Started session-1.scope - Session 1 of User core. May 12 23:39:08.685637 login[1670]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 12 23:39:08.689065 systemd-logind[1545]: New session 2 of user core. May 12 23:39:08.698835 systemd[1]: Started session-2.scope - Session 2 of User core. May 12 23:39:08.875443 kubelet[1717]: E0512 23:39:08.875401 1717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:39:08.876757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:39:08.876845 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:39:08.877181 systemd[1]: kubelet.service: Consumed 672ms CPU time, 247.1M memory peak. May 12 23:39:19.127435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 12 23:39:19.133952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:39:19.468257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:19.471435 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:39:19.510693 kubelet[1767]: E0512 23:39:19.510657 1767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:39:19.513333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:39:19.513492 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:39:19.513804 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.5M memory peak. May 12 23:39:29.763834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 12 23:39:29.772859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 12 23:39:30.102044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:30.104569 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:39:30.128344 kubelet[1782]: E0512 23:39:30.128300 1782 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:39:30.129936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:39:30.130082 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:39:30.130390 systemd[1]: kubelet.service: Consumed 80ms CPU time, 97.8M memory peak. May 12 23:39:35.523295 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 12 23:39:35.524940 systemd[1]: Started sshd@0-139.178.70.108:22-139.178.68.195:51990.service - OpenSSH per-connection server daemon (139.178.68.195:51990). May 12 23:39:35.566243 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 51990 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:35.567359 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:35.570796 systemd-logind[1545]: New session 3 of user core. May 12 23:39:35.578873 systemd[1]: Started session-3.scope - Session 3 of User core. May 12 23:39:35.642961 systemd[1]: Started sshd@1-139.178.70.108:22-139.178.68.195:51998.service - OpenSSH per-connection server daemon (139.178.68.195:51998). May 12 23:39:35.676298 sshd[1796]: Accepted publickey for core from 139.178.68.195 port 51998 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:35.677142 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:35.679867 systemd-logind[1545]: New session 4 of user core. May 12 23:39:35.685816 systemd[1]: Started session-4.scope - Session 4 of User core. May 12 23:39:35.735395 sshd[1798]: Connection closed by 139.178.68.195 port 51998 May 12 23:39:35.735343 sshd-session[1796]: pam_unix(sshd:session): session closed for user core May 12 23:39:35.744449 systemd[1]: sshd@1-139.178.70.108:22-139.178.68.195:51998.service: Deactivated successfully. May 12 23:39:35.745652 systemd[1]: session-4.scope: Deactivated successfully. May 12 23:39:35.746387 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. May 12 23:39:35.747903 systemd-logind[1545]: Removed session 4. May 12 23:39:35.750952 systemd[1]: Started sshd@2-139.178.70.108:22-139.178.68.195:52010.service - OpenSSH per-connection server daemon (139.178.68.195:52010). May 12 23:39:35.786441 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 52010 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:35.787505 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:35.791193 systemd-logind[1545]: New session 5 of user core. May 12 23:39:35.796840 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 12 23:39:35.843802 sshd[1806]: Connection closed by 139.178.68.195 port 52010 May 12 23:39:35.844170 sshd-session[1803]: pam_unix(sshd:session): session closed for user core May 12 23:39:35.854486 systemd[1]: sshd@2-139.178.70.108:22-139.178.68.195:52010.service: Deactivated successfully. May 12 23:39:35.855591 systemd[1]: session-5.scope: Deactivated successfully. May 12 23:39:35.856161 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. May 12 23:39:35.857416 systemd[1]: Started sshd@3-139.178.70.108:22-139.178.68.195:52012.service - OpenSSH per-connection server daemon (139.178.68.195:52012). May 12 23:39:35.858975 systemd-logind[1545]: Removed session 5. May 12 23:39:35.894905 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 52012 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:35.895796 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:35.900646 systemd-logind[1545]: New session 6 of user core. May 12 23:39:35.909855 systemd[1]: Started session-6.scope - Session 6 of User core. May 12 23:39:35.960212 sshd[1814]: Connection closed by 139.178.68.195 port 52012 May 12 23:39:35.960656 sshd-session[1811]: pam_unix(sshd:session): session closed for user core May 12 23:39:35.969371 systemd[1]: sshd@3-139.178.70.108:22-139.178.68.195:52012.service: Deactivated successfully. May 12 23:39:35.970497 systemd[1]: session-6.scope: Deactivated successfully. May 12 23:39:35.971173 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. May 12 23:39:35.975946 systemd[1]: Started sshd@4-139.178.70.108:22-139.178.68.195:52022.service - OpenSSH per-connection server daemon (139.178.68.195:52022). May 12 23:39:35.976355 systemd-logind[1545]: Removed session 6. May 12 23:39:36.011097 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 52022 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:36.011998 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:36.016500 systemd-logind[1545]: New session 7 of user core. May 12 23:39:36.021853 systemd[1]: Started session-7.scope - Session 7 of User core. May 12 23:39:36.081544 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 12 23:39:36.081805 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:39:36.091444 sudo[1823]: pam_unix(sudo:session): session closed for user root May 12 23:39:36.092375 sshd[1822]: Connection closed by 139.178.68.195 port 52022 May 12 23:39:36.093348 sshd-session[1819]: pam_unix(sshd:session): session closed for user core May 12 23:39:36.104072 systemd[1]: sshd@4-139.178.70.108:22-139.178.68.195:52022.service: Deactivated successfully. May 12 23:39:36.105047 systemd[1]: session-7.scope: Deactivated successfully. May 12 23:39:36.105647 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. May 12 23:39:36.109006 systemd[1]: Started sshd@5-139.178.70.108:22-139.178.68.195:52026.service - OpenSSH per-connection server daemon (139.178.68.195:52026). May 12 23:39:36.110585 systemd-logind[1545]: Removed session 7. 
May 12 23:39:36.143784 sshd[1828]: Accepted publickey for core from 139.178.68.195 port 52026 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:36.144684 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:36.148474 systemd-logind[1545]: New session 8 of user core. May 12 23:39:36.157863 systemd[1]: Started session-8.scope - Session 8 of User core. May 12 23:39:36.208374 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 12 23:39:36.208790 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:39:36.211237 sudo[1833]: pam_unix(sudo:session): session closed for user root May 12 23:39:36.214979 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 12 23:39:36.215181 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:39:36.231998 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 23:39:36.250504 augenrules[1855]: No rules May 12 23:39:36.251247 systemd[1]: audit-rules.service: Deactivated successfully. May 12 23:39:36.251395 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 23:39:36.252108 sudo[1832]: pam_unix(sudo:session): session closed for user root May 12 23:39:36.252976 sshd[1831]: Connection closed by 139.178.68.195 port 52026 May 12 23:39:36.253293 sshd-session[1828]: pam_unix(sshd:session): session closed for user core May 12 23:39:36.258154 systemd[1]: sshd@5-139.178.70.108:22-139.178.68.195:52026.service: Deactivated successfully. May 12 23:39:36.259001 systemd[1]: session-8.scope: Deactivated successfully. May 12 23:39:36.259441 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. May 12 23:39:36.260336 systemd[1]: Started sshd@6-139.178.70.108:22-139.178.68.195:52030.service - OpenSSH per-connection server daemon (139.178.68.195:52030). May 12 23:39:36.262934 systemd-logind[1545]: Removed session 8. May 12 23:39:36.298300 sshd[1863]: Accepted publickey for core from 139.178.68.195 port 52030 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:39:36.299132 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:39:36.303888 systemd-logind[1545]: New session 9 of user core. May 12 23:39:36.309930 systemd[1]: Started session-9.scope - Session 9 of User core. May 12 23:39:36.360011 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 12 23:39:36.360219 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:39:36.672978 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 12 23:39:36.673274 systemd[1]: Starting docker.service - Docker Application Container Engine... May 12 23:39:36.941585 dockerd[1883]: time="2025-05-12T23:39:36.941497233Z" level=info msg="Starting up" May 12 23:39:36.996325 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1670692346-merged.mount: Deactivated successfully. May 12 23:39:37.012598 dockerd[1883]: time="2025-05-12T23:39:37.012580478Z" level=info msg="Loading containers: start." 
May 12 23:39:37.111762 kernel: Initializing XFRM netlink socket May 12 23:39:37.163663 systemd-networkd[1257]: docker0: Link UP May 12 23:39:37.182801 dockerd[1883]: time="2025-05-12T23:39:37.182768679Z" level=info msg="Loading containers: done." May 12 23:39:37.192612 dockerd[1883]: time="2025-05-12T23:39:37.192545155Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 12 23:39:37.192705 dockerd[1883]: time="2025-05-12T23:39:37.192609882Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 12 23:39:37.192705 dockerd[1883]: time="2025-05-12T23:39:37.192680165Z" level=info msg="Daemon has completed initialization" May 12 23:39:37.209202 dockerd[1883]: time="2025-05-12T23:39:37.209162526Z" level=info msg="API listen on /run/docker.sock" May 12 23:39:37.209464 systemd[1]: Started docker.service - Docker Application Container Engine. May 12 23:39:38.112400 containerd[1566]: time="2025-05-12T23:39:38.112367253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 12 23:39:38.853810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033922066.mount: Deactivated successfully. May 12 23:39:39.857167 containerd[1566]: time="2025-05-12T23:39:39.856563181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:39.857676 containerd[1566]: time="2025-05-12T23:39:39.857659120Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 12 23:39:39.861885 containerd[1566]: time="2025-05-12T23:39:39.861872784Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:39.863544 containerd[1566]: time="2025-05-12T23:39:39.863531637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:39.864205 containerd[1566]: time="2025-05-12T23:39:39.863933100Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.751545687s" May 12 23:39:39.864465 containerd[1566]: time="2025-05-12T23:39:39.864455162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 12 23:39:39.877278 containerd[1566]: time="2025-05-12T23:39:39.877235982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 12 23:39:40.380318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 12 23:39:40.385874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:39:40.454262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 12 23:39:40.456764 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:39:40.503769 kubelet[2143]: E0512 23:39:40.503718 2143 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:39:40.505338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:39:40.505442 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:39:40.505875 systemd[1]: kubelet.service: Consumed 82ms CPU time, 99.8M memory peak. May 12 23:39:41.671772 containerd[1566]: time="2025-05-12T23:39:41.671532390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:41.679085 containerd[1566]: time="2025-05-12T23:39:41.679047303Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 12 23:39:41.687774 containerd[1566]: time="2025-05-12T23:39:41.686969935Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:41.689173 containerd[1566]: time="2025-05-12T23:39:41.689134770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:41.691931 containerd[1566]: time="2025-05-12T23:39:41.689652709Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.812274075s" May 12 23:39:41.691931 containerd[1566]: time="2025-05-12T23:39:41.689855297Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 12 23:39:41.706304 containerd[1566]: time="2025-05-12T23:39:41.706274940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 12 23:39:42.976974 containerd[1566]: time="2025-05-12T23:39:42.976655505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:42.977685 containerd[1566]: time="2025-05-12T23:39:42.977666621Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 12 23:39:42.978754 containerd[1566]: time="2025-05-12T23:39:42.977907617Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:42.979748 containerd[1566]: time="2025-05-12T23:39:42.979718184Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:42.980768 containerd[1566]: time="2025-05-12T23:39:42.980724270Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.274427701s" May 12 23:39:42.980799 containerd[1566]: time="2025-05-12T23:39:42.980772442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 12 23:39:42.993169 containerd[1566]: time="2025-05-12T23:39:42.993146856Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 12 23:39:43.783594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427373404.mount: Deactivated successfully. May 12 23:39:44.347298 containerd[1566]: time="2025-05-12T23:39:44.347265560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:44.357301 containerd[1566]: time="2025-05-12T23:39:44.357275286Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 12 23:39:44.362461 containerd[1566]: time="2025-05-12T23:39:44.362444545Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:44.365245 containerd[1566]: time="2025-05-12T23:39:44.365231992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:44.365558 containerd[1566]: time="2025-05-12T23:39:44.365507681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.372339599s" May 12 23:39:44.365558 containerd[1566]: time="2025-05-12T23:39:44.365525236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 12 23:39:44.379208 containerd[1566]: time="2025-05-12T23:39:44.379184090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 12 23:39:45.100839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072639168.mount: Deactivated successfully. 
May 12 23:39:45.981362 containerd[1566]: time="2025-05-12T23:39:45.981321041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:45.982245 containerd[1566]: time="2025-05-12T23:39:45.982215812Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 12 23:39:45.982850 containerd[1566]: time="2025-05-12T23:39:45.982831994Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:45.985792 containerd[1566]: time="2025-05-12T23:39:45.985772937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:45.987000 containerd[1566]: time="2025-05-12T23:39:45.986779094Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.607554918s" May 12 23:39:45.987000 containerd[1566]: time="2025-05-12T23:39:45.986799925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 12 23:39:46.000241 containerd[1566]: time="2025-05-12T23:39:46.000197268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 12 23:39:46.611243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592922355.mount: Deactivated successfully. 
May 12 23:39:46.612887 containerd[1566]: time="2025-05-12T23:39:46.612856791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:46.613275 containerd[1566]: time="2025-05-12T23:39:46.613253696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 12 23:39:46.613746 containerd[1566]: time="2025-05-12T23:39:46.613322092Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:46.614561 containerd[1566]: time="2025-05-12T23:39:46.614547637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:46.615089 containerd[1566]: time="2025-05-12T23:39:46.615076228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 614.746969ms" May 12 23:39:46.615143 containerd[1566]: time="2025-05-12T23:39:46.615134478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 12 23:39:46.627979 containerd[1566]: time="2025-05-12T23:39:46.627956838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 12 23:39:48.034717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638258832.mount: Deactivated successfully. May 12 23:39:50.755972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 12 23:39:50.764863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:39:51.001194 update_engine[1547]: I20250512 23:39:51.000779 1547 update_attempter.cc:509] Updating boot flags... May 12 23:39:51.093759 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2294) May 12 23:39:51.444639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:51.447864 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:39:51.644708 kubelet[2305]: E0512 23:39:51.644677 2305 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:39:51.646255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:39:51.646344 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:39:51.646521 systemd[1]: kubelet.service: Consumed 86ms CPU time, 93.5M memory peak. 
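Editor's note: both kubelet start attempts so far (PIDs 2143 and 2305) exit with status 1 because /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts. A tiny preflight check for that file makes the failure mode obvious before digging into unit status; the path is taken from the error text above, everything else in this sketch is illustrative.

    import os
    import sys

    # Path copied from the kubelet error above; the kubelet loads its config
    # from this file and exits immediately while it is missing.
    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
        if os.path.isfile(path):
            print(f"found {path} ({os.path.getsize(path)} bytes)")
            return True
        print(f"missing {path}: kubelet will keep exiting with status 1",
              file=sys.stderr)
        return False

    if __name__ == "__main__":
        sys.exit(0 if kubelet_config_present() else 1)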
May 12 23:39:52.308650 containerd[1566]: time="2025-05-12T23:39:52.308616641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:52.311261 containerd[1566]: time="2025-05-12T23:39:52.311239388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 12 23:39:52.318116 containerd[1566]: time="2025-05-12T23:39:52.318093253Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:52.326144 containerd[1566]: time="2025-05-12T23:39:52.326120719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:39:52.326742 containerd[1566]: time="2025-05-12T23:39:52.326647241Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.698573683s" May 12 23:39:52.326742 containerd[1566]: time="2025-05-12T23:39:52.326666709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 12 23:39:54.716743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:54.717196 systemd[1]: kubelet.service: Consumed 86ms CPU time, 93.5M memory peak. May 12 23:39:54.722864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:39:54.737139 systemd[1]: Reload requested from client PID 2376 ('systemctl') (unit session-9.scope)... May 12 23:39:54.737151 systemd[1]: Reloading... May 12 23:39:54.807769 zram_generator::config[2421]: No configuration found. May 12 23:39:54.865921 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:39:54.883940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:39:54.951022 systemd[1]: Reloading finished in 213 ms. May 12 23:39:54.996458 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 12 23:39:54.996512 systemd[1]: kubelet.service: Failed with result 'signal'. May 12 23:39:54.996671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:54.996699 systemd[1]: kubelet.service: Consumed 48ms CPU time, 78.5M memory peak. May 12 23:39:55.001916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:39:55.326979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:39:55.329787 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 23:39:55.365670 kubelet[2488]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:39:55.365670 kubelet[2488]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 12 23:39:55.365670 kubelet[2488]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:39:55.381673 kubelet[2488]: I0512 23:39:55.381647 2488 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 23:39:55.681928 kubelet[2488]: I0512 23:39:55.681611 2488 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 12 23:39:55.681928 kubelet[2488]: I0512 23:39:55.681635 2488 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 23:39:55.681928 kubelet[2488]: I0512 23:39:55.681819 2488 server.go:927] "Client rotation is on, will bootstrap in background" May 12 23:39:55.813150 kubelet[2488]: I0512 23:39:55.813129 2488 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 23:39:55.844040 kubelet[2488]: E0512 23:39:55.842467 2488 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:55.909855 kubelet[2488]: I0512 23:39:55.909836 2488 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 23:39:55.910091 kubelet[2488]: I0512 23:39:55.910074 2488 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 23:39:55.920149 kubelet[2488]: I0512 23:39:55.910130 2488 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 12 23:39:55.920327 kubelet[2488]: I0512 23:39:55.920316 2488 topology_manager.go:138] "Creating topology manager with none policy" May 12 23:39:55.920381 kubelet[2488]: I0512 23:39:55.920375 2488 container_manager_linux.go:301] "Creating device plugin manager" May 12 23:39:55.931031 kubelet[2488]: I0512 23:39:55.931019 2488 state_mem.go:36] "Initialized new in-memory state store" May 12 23:39:55.935341 kubelet[2488]: I0512 23:39:55.935300 2488 kubelet.go:400] "Attempting to sync node with API server" May 12 23:39:55.935602 kubelet[2488]: I0512 23:39:55.935593 2488 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 23:39:55.935706 kubelet[2488]: W0512 23:39:55.935637 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:55.935706 kubelet[2488]: E0512 23:39:55.935684 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:55.944939 kubelet[2488]: I0512 23:39:55.944839 2488 kubelet.go:312] "Adding apiserver pod source" May 12 23:39:55.951453 kubelet[2488]: I0512 23:39:55.951301 2488 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 23:39:55.971563 kubelet[2488]: W0512 23:39:55.971514 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:55.971563 kubelet[2488]: E0512 23:39:55.971549 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:55.972127 kubelet[2488]: I0512 23:39:55.971964 2488 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 12 23:39:55.981693 kubelet[2488]: I0512 23:39:55.981601 2488 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 23:39:55.992261 kubelet[2488]: W0512 23:39:55.991982 2488 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 12 23:39:56.001526 kubelet[2488]: I0512 23:39:56.001435 2488 server.go:1264] "Started kubelet" May 12 23:39:56.015633 kubelet[2488]: I0512 23:39:56.015456 2488 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 23:39:56.015764 kubelet[2488]: I0512 23:39:56.015751 2488 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 23:39:56.016521 kubelet[2488]: I0512 23:39:56.016504 2488 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 23:39:56.024099 kubelet[2488]: I0512 23:39:56.024062 2488 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 23:39:56.024781 kubelet[2488]: I0512 23:39:56.024767 2488 server.go:455] "Adding debug handlers to kubelet server" May 12 23:39:56.026833 kubelet[2488]: I0512 23:39:56.026818 2488 volume_manager.go:291] "Starting Kubelet Volume Manager" May 12 23:39:56.036345 kubelet[2488]: I0512 23:39:56.036321 2488 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 23:39:56.036400 kubelet[2488]: I0512 23:39:56.036373 2488 reconciler.go:26] "Reconciler: start to sync state" May 12 23:39:56.044751 kubelet[2488]: E0512 23:39:56.044441 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" May 12 23:39:56.049008 kubelet[2488]: E0512 23:39:56.048897 2488 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eec01592a7932 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 23:39:56.00140933 +0000 UTC m=+0.669158317,LastTimestamp:2025-05-12 23:39:56.00140933 +0000 UTC m=+0.669158317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 12 23:39:56.049522 kubelet[2488]: W0512 23:39:56.049400 2488 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:56.049522 kubelet[2488]: E0512 23:39:56.049431 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:56.054642 kubelet[2488]: I0512 23:39:56.054475 2488 factory.go:221] Registration of the containerd container factory successfully May 12 23:39:56.054642 kubelet[2488]: I0512 23:39:56.054486 2488 factory.go:221] Registration of the systemd container factory successfully May 12 23:39:56.054642 kubelet[2488]: I0512 23:39:56.054567 2488 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 23:39:56.054828 kubelet[2488]: E0512 23:39:56.054814 2488 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 23:39:56.078629 kubelet[2488]: I0512 23:39:56.078593 2488 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 23:39:56.079280 kubelet[2488]: I0512 23:39:56.079263 2488 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 12 23:39:56.079315 kubelet[2488]: I0512 23:39:56.079287 2488 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 23:39:56.079315 kubelet[2488]: I0512 23:39:56.079301 2488 kubelet.go:2337] "Starting kubelet main sync loop" May 12 23:39:56.079346 kubelet[2488]: E0512 23:39:56.079326 2488 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 23:39:56.083800 kubelet[2488]: W0512 23:39:56.083779 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:56.083800 kubelet[2488]: E0512 23:39:56.083801 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:56.088151 kubelet[2488]: I0512 23:39:56.088007 2488 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 23:39:56.088151 kubelet[2488]: I0512 23:39:56.088017 2488 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 23:39:56.088151 kubelet[2488]: I0512 23:39:56.088026 2488 state_mem.go:36] "Initialized new in-memory state store" May 12 23:39:56.096027 kubelet[2488]: I0512 23:39:56.095970 2488 policy_none.go:49] "None policy: Start" May 12 23:39:56.096625 kubelet[2488]: I0512 23:39:56.096402 2488 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 23:39:56.096625 kubelet[2488]: I0512 23:39:56.096418 2488 state_mem.go:35] "Initializing new in-memory state store" May 12 23:39:56.119023 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. May 12 23:39:56.128145 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 12 23:39:56.131237 kubelet[2488]: I0512 23:39:56.131153 2488 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:39:56.131372 kubelet[2488]: E0512 23:39:56.131354 2488 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 12 23:39:56.133317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 12 23:39:56.142175 kubelet[2488]: I0512 23:39:56.142160 2488 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 23:39:56.142712 kubelet[2488]: I0512 23:39:56.142684 2488 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 23:39:56.143114 kubelet[2488]: I0512 23:39:56.143002 2488 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 23:39:56.144065 kubelet[2488]: E0512 23:39:56.144047 2488 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 12 23:39:56.179673 kubelet[2488]: I0512 23:39:56.179593 2488 topology_manager.go:215] "Topology Admit Handler" podUID="09a94040ed8b928e8f977c687ec59ddd" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 23:39:56.180672 kubelet[2488]: I0512 23:39:56.180329 2488 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 23:39:56.190686 kubelet[2488]: I0512 23:39:56.190112 2488 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 23:39:56.194374 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 12 23:39:56.209032 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 12 23:39:56.218805 systemd[1]: Created slice kubepods-burstable-pod09a94040ed8b928e8f977c687ec59ddd.slice - libcontainer container kubepods-burstable-pod09a94040ed8b928e8f977c687ec59ddd.slice. 
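Editor's note: this kubelet runs with CgroupDriver "systemd" and CgroupsPerQOS enabled, and systemd creates kubepods.slice, kubepods-burstable.slice, and one kubepods-burstable-pod<UID>.slice per admitted static pod, as logged above. A short helper can reconstruct those unit names from a pod UID; it covers only the burstable/besteffort layout seen here (guaranteed pods sit directly under kubepods.slice), and the dash-to-underscore handling is an assumption, since these static-pod UIDs contain no dashes.

    # Reconstruct the systemd slice names seen above for a given pod.
    # Assumption (not visible in this log): dashes in a pod UID are replaced
    # with underscores before being embedded in the slice name.
    def pod_slices(pod_uid: str, qos_class: str = "burstable"):
        uid = pod_uid.replace("-", "_")
        return [
            "kubepods.slice",
            f"kubepods-{qos_class}.slice",
            f"kubepods-{qos_class}-pod{uid}.slice",
        ]

    for uid in ("09a94040ed8b928e8f977c687ec59ddd",   # kube-apiserver-localhost
                "b20b39a8540dba87b5883a6f0f602dba",   # kube-controller-manager-localhost
                "6ece95f10dbffa04b25ec3439a115512"):  # kube-scheduler-localhost
        print(pod_slices(uid)[-1])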
May 12 23:39:56.237774 kubelet[2488]: I0512 23:39:56.237754 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09a94040ed8b928e8f977c687ec59ddd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"09a94040ed8b928e8f977c687ec59ddd\") " pod="kube-system/kube-apiserver-localhost" May 12 23:39:56.237877 kubelet[2488]: I0512 23:39:56.237868 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09a94040ed8b928e8f977c687ec59ddd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"09a94040ed8b928e8f977c687ec59ddd\") " pod="kube-system/kube-apiserver-localhost" May 12 23:39:56.237925 kubelet[2488]: I0512 23:39:56.237918 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09a94040ed8b928e8f977c687ec59ddd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"09a94040ed8b928e8f977c687ec59ddd\") " pod="kube-system/kube-apiserver-localhost" May 12 23:39:56.238045 kubelet[2488]: I0512 23:39:56.237962 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:39:56.238045 kubelet[2488]: I0512 23:39:56.237973 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:39:56.238045 kubelet[2488]: I0512 23:39:56.237983 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:39:56.238045 kubelet[2488]: I0512 23:39:56.237992 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:39:56.238045 kubelet[2488]: I0512 23:39:56.238001 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:39:56.238139 kubelet[2488]: I0512 23:39:56.238010 2488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " 
pod="kube-system/kube-scheduler-localhost" May 12 23:39:56.245052 kubelet[2488]: E0512 23:39:56.245028 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" May 12 23:39:56.332335 kubelet[2488]: I0512 23:39:56.332267 2488 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:39:56.332561 kubelet[2488]: E0512 23:39:56.332545 2488 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 12 23:39:56.507044 containerd[1566]: time="2025-05-12T23:39:56.507018897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 12 23:39:56.517925 containerd[1566]: time="2025-05-12T23:39:56.517687515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 12 23:39:56.521226 containerd[1566]: time="2025-05-12T23:39:56.521205124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:09a94040ed8b928e8f977c687ec59ddd,Namespace:kube-system,Attempt:0,}" May 12 23:39:56.645633 kubelet[2488]: E0512 23:39:56.645603 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" May 12 23:39:56.733852 kubelet[2488]: I0512 23:39:56.733822 2488 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:39:56.734277 kubelet[2488]: E0512 23:39:56.734258 2488 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 12 23:39:56.915260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707076728.mount: Deactivated successfully. 
May 12 23:39:56.917767 containerd[1566]: time="2025-05-12T23:39:56.917551947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:39:56.918415 containerd[1566]: time="2025-05-12T23:39:56.918381296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 12 23:39:56.918962 containerd[1566]: time="2025-05-12T23:39:56.918943225Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:39:56.920356 containerd[1566]: time="2025-05-12T23:39:56.920338295Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:39:56.921551 containerd[1566]: time="2025-05-12T23:39:56.921511502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 12 23:39:56.922745 containerd[1566]: time="2025-05-12T23:39:56.922717245Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:39:56.922802 containerd[1566]: time="2025-05-12T23:39:56.922778988Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 12 23:39:56.923948 containerd[1566]: time="2025-05-12T23:39:56.923921213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:39:56.925997 containerd[1566]: time="2025-05-12T23:39:56.924560656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 415.581569ms" May 12 23:39:56.925997 containerd[1566]: time="2025-05-12T23:39:56.925958349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 408.196465ms" May 12 23:39:56.929252 containerd[1566]: time="2025-05-12T23:39:56.929117696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 407.866423ms" May 12 23:39:57.113246 kubelet[2488]: W0512 23:39:57.113223 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.113391 
kubelet[2488]: E0512 23:39:57.113378 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.180988 kubelet[2488]: W0512 23:39:57.180898 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.181157 kubelet[2488]: E0512 23:39:57.181141 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.233775 containerd[1566]: time="2025-05-12T23:39:57.233645239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:39:57.233775 containerd[1566]: time="2025-05-12T23:39:57.233694783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:39:57.233775 containerd[1566]: time="2025-05-12T23:39:57.233713598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:39:57.234615 containerd[1566]: time="2025-05-12T23:39:57.234452855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:39:57.236957 containerd[1566]: time="2025-05-12T23:39:57.236853854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:39:57.236957 containerd[1566]: time="2025-05-12T23:39:57.236895400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:39:57.236957 containerd[1566]: time="2025-05-12T23:39:57.236914014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:39:57.237115 containerd[1566]: time="2025-05-12T23:39:57.236999792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:39:57.237423 containerd[1566]: time="2025-05-12T23:39:57.237333968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:39:57.237724 containerd[1566]: time="2025-05-12T23:39:57.237557590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:39:57.237724 containerd[1566]: time="2025-05-12T23:39:57.237573418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:39:57.237724 containerd[1566]: time="2025-05-12T23:39:57.237618231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:39:57.272852 systemd[1]: Started cri-containerd-1c7d444655c5e93c72151675caf5e827436554b56de53e047809a209a0e4a9ce.scope - libcontainer container 1c7d444655c5e93c72151675caf5e827436554b56de53e047809a209a0e4a9ce. May 12 23:39:57.274081 systemd[1]: Started cri-containerd-508451aba825b845203360296b612fbbfb0f5470e89c6524d88de8d08c32ccf1.scope - libcontainer container 508451aba825b845203360296b612fbbfb0f5470e89c6524d88de8d08c32ccf1. May 12 23:39:57.278465 systemd[1]: Started cri-containerd-e6ecaa72db178081f298eabaac2934e982fca40ecbd322e8ff992358be987a6e.scope - libcontainer container e6ecaa72db178081f298eabaac2934e982fca40ecbd322e8ff992358be987a6e. May 12 23:39:57.316433 containerd[1566]: time="2025-05-12T23:39:57.316406508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"508451aba825b845203360296b612fbbfb0f5470e89c6524d88de8d08c32ccf1\"" May 12 23:39:57.322481 containerd[1566]: time="2025-05-12T23:39:57.322282414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:09a94040ed8b928e8f977c687ec59ddd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6ecaa72db178081f298eabaac2934e982fca40ecbd322e8ff992358be987a6e\"" May 12 23:39:57.330257 containerd[1566]: time="2025-05-12T23:39:57.330203547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c7d444655c5e93c72151675caf5e827436554b56de53e047809a209a0e4a9ce\"" May 12 23:39:57.346908 containerd[1566]: time="2025-05-12T23:39:57.346847603Z" level=info msg="CreateContainer within sandbox \"1c7d444655c5e93c72151675caf5e827436554b56de53e047809a209a0e4a9ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 23:39:57.346994 containerd[1566]: time="2025-05-12T23:39:57.346962564Z" level=info msg="CreateContainer within sandbox \"e6ecaa72db178081f298eabaac2934e982fca40ecbd322e8ff992358be987a6e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 23:39:57.347202 containerd[1566]: time="2025-05-12T23:39:57.347041140Z" level=info msg="CreateContainer within sandbox \"508451aba825b845203360296b612fbbfb0f5470e89c6524d88de8d08c32ccf1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 23:39:57.452575 kubelet[2488]: W0512 23:39:57.452511 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.452575 kubelet[2488]: E0512 23:39:57.452561 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.477332 kubelet[2488]: E0512 23:39:57.477293 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" May 12 23:39:57.515801 kubelet[2488]: W0512 23:39:57.515727 2488 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.515801 kubelet[2488]: E0512 23:39:57.515787 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.536232 kubelet[2488]: I0512 23:39:57.535999 2488 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:39:57.536232 kubelet[2488]: E0512 23:39:57.536195 2488 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 12 23:39:57.878322 kubelet[2488]: E0512 23:39:57.878257 2488 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:57.964725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220443521.mount: Deactivated successfully. May 12 23:39:58.024567 containerd[1566]: time="2025-05-12T23:39:58.024451487Z" level=info msg="CreateContainer within sandbox \"1c7d444655c5e93c72151675caf5e827436554b56de53e047809a209a0e4a9ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"002d1ab73120fb536da095aba6d3e6e89b152c6b59a9580b1ea646d66b49f9d1\"" May 12 23:39:58.024567 containerd[1566]: time="2025-05-12T23:39:58.024520298Z" level=info msg="CreateContainer within sandbox \"508451aba825b845203360296b612fbbfb0f5470e89c6524d88de8d08c32ccf1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6220e51194bc1569ddb705d588e6f403555419dfed1b70c31d0e568f1c3dbf3c\"" May 12 23:39:58.025999 containerd[1566]: time="2025-05-12T23:39:58.025075453Z" level=info msg="StartContainer for \"6220e51194bc1569ddb705d588e6f403555419dfed1b70c31d0e568f1c3dbf3c\"" May 12 23:39:58.025999 containerd[1566]: time="2025-05-12T23:39:58.025103418Z" level=info msg="StartContainer for \"002d1ab73120fb536da095aba6d3e6e89b152c6b59a9580b1ea646d66b49f9d1\"" May 12 23:39:58.031592 containerd[1566]: time="2025-05-12T23:39:58.031569499Z" level=info msg="CreateContainer within sandbox \"e6ecaa72db178081f298eabaac2934e982fca40ecbd322e8ff992358be987a6e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e409fbde5984c6440771efd11929acabf4ae2c09ebeefcdcbef006a6fd308c68\"" May 12 23:39:58.032185 containerd[1566]: time="2025-05-12T23:39:58.032169120Z" level=info msg="StartContainer for \"e409fbde5984c6440771efd11929acabf4ae2c09ebeefcdcbef006a6fd308c68\"" May 12 23:39:58.048883 systemd[1]: Started cri-containerd-6220e51194bc1569ddb705d588e6f403555419dfed1b70c31d0e568f1c3dbf3c.scope - libcontainer container 6220e51194bc1569ddb705d588e6f403555419dfed1b70c31d0e568f1c3dbf3c. May 12 23:39:58.053485 systemd[1]: Started cri-containerd-002d1ab73120fb536da095aba6d3e6e89b152c6b59a9580b1ea646d66b49f9d1.scope - libcontainer container 002d1ab73120fb536da095aba6d3e6e89b152c6b59a9580b1ea646d66b49f9d1. 
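Editor's note: the "Failed to ensure lease exists, will retry" errors back off by doubling the retry interval (200ms, 400ms, 800ms, 1.6s so far in this log). A minimal sketch of such a doubling schedule; the base and factor mirror the logged intervals, while the 7.0s cap is purely an assumed value for illustration, not something taken from this log.

    from itertools import count

    def backoff_intervals(base: float = 0.2, factor: float = 2.0, cap: float = 7.0):
        # Yields 0.2s, 0.4s, 0.8s, 1.6s, 3.2s, ... capped at `cap` (assumed).
        for attempt in count():
            yield min(base * factor ** attempt, cap)

    intervals = backoff_intervals()
    print([next(intervals) for _ in range(7)])
    # -> [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 7.0]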
May 12 23:39:58.066828 systemd[1]: Started cri-containerd-e409fbde5984c6440771efd11929acabf4ae2c09ebeefcdcbef006a6fd308c68.scope - libcontainer container e409fbde5984c6440771efd11929acabf4ae2c09ebeefcdcbef006a6fd308c68. May 12 23:39:58.107572 containerd[1566]: time="2025-05-12T23:39:58.107495603Z" level=info msg="StartContainer for \"e409fbde5984c6440771efd11929acabf4ae2c09ebeefcdcbef006a6fd308c68\" returns successfully" May 12 23:39:58.113965 containerd[1566]: time="2025-05-12T23:39:58.113897598Z" level=info msg="StartContainer for \"6220e51194bc1569ddb705d588e6f403555419dfed1b70c31d0e568f1c3dbf3c\" returns successfully" May 12 23:39:58.126308 containerd[1566]: time="2025-05-12T23:39:58.126273961Z" level=info msg="StartContainer for \"002d1ab73120fb536da095aba6d3e6e89b152c6b59a9580b1ea646d66b49f9d1\" returns successfully" May 12 23:39:58.910833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626868869.mount: Deactivated successfully. May 12 23:39:58.910888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2892587506.mount: Deactivated successfully. May 12 23:39:58.934897 kubelet[2488]: W0512 23:39:58.934852 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:58.934897 kubelet[2488]: E0512 23:39:58.934879 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:59.013113 kubelet[2488]: W0512 23:39:59.013000 2488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:59.013113 kubelet[2488]: E0512 23:39:59.013033 2488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 12 23:39:59.077721 kubelet[2488]: E0512 23:39:59.077684 2488 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="3.2s" May 12 23:39:59.137256 kubelet[2488]: I0512 23:39:59.137229 2488 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:39:59.137440 kubelet[2488]: E0512 23:39:59.137422 2488 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 12 23:40:00.722320 kubelet[2488]: E0512 23:40:00.722296 2488 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 12 23:40:00.975424 kubelet[2488]: I0512 23:40:00.975300 2488 apiserver.go:52] "Watching apiserver" May 12 23:40:01.037055 kubelet[2488]: I0512 23:40:01.037025 2488 desired_state_of_world_populator.go:157] "Finished populating 
initial desired state of world" May 12 23:40:01.075950 kubelet[2488]: E0512 23:40:01.075905 2488 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 12 23:40:01.495874 kubelet[2488]: E0512 23:40:01.495851 2488 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 12 23:40:02.280868 kubelet[2488]: E0512 23:40:02.280837 2488 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 23:40:02.339683 kubelet[2488]: I0512 23:40:02.339642 2488 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:40:02.345848 kubelet[2488]: I0512 23:40:02.345749 2488 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 23:40:02.680407 systemd[1]: Reload requested from client PID 2768 ('systemctl') (unit session-9.scope)... May 12 23:40:02.680418 systemd[1]: Reloading... May 12 23:40:02.757753 zram_generator::config[2816]: No configuration found. May 12 23:40:02.827421 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 12 23:40:02.849107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:40:02.934799 systemd[1]: Reloading finished in 253 ms. May 12 23:40:02.956569 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:40:02.968824 systemd[1]: kubelet.service: Deactivated successfully. May 12 23:40:02.969012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:40:02.969058 systemd[1]: kubelet.service: Consumed 547ms CPU time, 114.2M memory peak. May 12 23:40:02.974051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:40:03.201257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:40:03.206045 (kubelet)[2880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 23:40:03.358930 kubelet[2880]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:40:03.358930 kubelet[2880]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 12 23:40:03.358930 kubelet[2880]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 12 23:40:03.373214 kubelet[2880]: I0512 23:40:03.373176 2880 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 23:40:03.375711 kubelet[2880]: I0512 23:40:03.375696 2880 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 12 23:40:03.375711 kubelet[2880]: I0512 23:40:03.375708 2880 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 23:40:03.375844 kubelet[2880]: I0512 23:40:03.375832 2880 server.go:927] "Client rotation is on, will bootstrap in background" May 12 23:40:03.376540 kubelet[2880]: I0512 23:40:03.376527 2880 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 12 23:40:03.377375 kubelet[2880]: I0512 23:40:03.377244 2880 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 23:40:03.389258 kubelet[2880]: I0512 23:40:03.389229 2880 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 12 23:40:03.389951 kubelet[2880]: I0512 23:40:03.389923 2880 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 23:40:03.390050 kubelet[2880]: I0512 23:40:03.389949 2880 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 12 23:40:03.390112 kubelet[2880]: I0512 23:40:03.390055 2880 topology_manager.go:138] "Creating topology manager with none policy" May 12 23:40:03.390112 kubelet[2880]: I0512 23:40:03.390062 2880 container_manager_linux.go:301] "Creating device plugin manager" May 12 23:40:03.390112 kubelet[2880]: I0512 23:40:03.390088 2880 state_mem.go:36] "Initialized new in-memory state store" May 12 23:40:03.390175 kubelet[2880]: I0512 23:40:03.390145 2880 kubelet.go:400] "Attempting to sync node with API server" May 12 23:40:03.390175 kubelet[2880]: I0512 23:40:03.390153 2880 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" May 12 23:40:03.390175 kubelet[2880]: I0512 23:40:03.390167 2880 kubelet.go:312] "Adding apiserver pod source" May 12 23:40:03.390221 kubelet[2880]: I0512 23:40:03.390178 2880 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 23:40:03.408649 kubelet[2880]: I0512 23:40:03.408625 2880 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 12 23:40:03.408756 kubelet[2880]: I0512 23:40:03.408743 2880 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 23:40:03.408990 kubelet[2880]: I0512 23:40:03.408972 2880 server.go:1264] "Started kubelet" May 12 23:40:03.411126 kubelet[2880]: I0512 23:40:03.411111 2880 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 23:40:03.414846 kubelet[2880]: I0512 23:40:03.414803 2880 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 23:40:03.415512 kubelet[2880]: I0512 23:40:03.415495 2880 server.go:455] "Adding debug handlers to kubelet server" May 12 23:40:03.416095 kubelet[2880]: I0512 23:40:03.416061 2880 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 23:40:03.416197 kubelet[2880]: I0512 23:40:03.416184 2880 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 23:40:03.418904 kubelet[2880]: I0512 23:40:03.418887 2880 volume_manager.go:291] "Starting Kubelet Volume Manager" May 12 23:40:03.418985 kubelet[2880]: I0512 23:40:03.418952 2880 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 23:40:03.419489 kubelet[2880]: I0512 23:40:03.419035 2880 reconciler.go:26] "Reconciler: start to sync state" May 12 23:40:03.421799 kubelet[2880]: I0512 23:40:03.421774 2880 factory.go:221] Registration of the containerd container factory successfully May 12 23:40:03.421799 kubelet[2880]: I0512 23:40:03.421786 2880 factory.go:221] Registration of the systemd container factory successfully May 12 23:40:03.422158 kubelet[2880]: I0512 23:40:03.422141 2880 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 23:40:03.445063 kubelet[2880]: I0512 23:40:03.444998 2880 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 23:40:03.445871 kubelet[2880]: I0512 23:40:03.445858 2880 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 23:40:03.446306 kubelet[2880]: I0512 23:40:03.445945 2880 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 23:40:03.446306 kubelet[2880]: I0512 23:40:03.445964 2880 kubelet.go:2337] "Starting kubelet main sync loop" May 12 23:40:03.446306 kubelet[2880]: E0512 23:40:03.445995 2880 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 23:40:03.459563 kubelet[2880]: I0512 23:40:03.459510 2880 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 23:40:03.459563 kubelet[2880]: I0512 23:40:03.459520 2880 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 23:40:03.459563 kubelet[2880]: I0512 23:40:03.459532 2880 state_mem.go:36] "Initialized new in-memory state store" May 12 23:40:03.459667 kubelet[2880]: I0512 23:40:03.459628 2880 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 23:40:03.459667 kubelet[2880]: I0512 23:40:03.459635 2880 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 23:40:03.459667 kubelet[2880]: I0512 23:40:03.459646 2880 policy_none.go:49] "None policy: Start" May 12 23:40:03.460497 kubelet[2880]: I0512 23:40:03.460483 2880 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 23:40:03.460497 kubelet[2880]: I0512 23:40:03.460497 2880 state_mem.go:35] "Initializing new in-memory state store" May 12 23:40:03.460587 kubelet[2880]: I0512 23:40:03.460574 2880 state_mem.go:75] "Updated machine memory state" May 12 23:40:03.463087 kubelet[2880]: I0512 23:40:03.463071 2880 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 23:40:03.464728 kubelet[2880]: I0512 23:40:03.463160 2880 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 23:40:03.464728 kubelet[2880]: I0512 23:40:03.463222 2880 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 23:40:03.476023 sudo[2894]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 12 23:40:03.476454 sudo[2894]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 12 23:40:03.522340 kubelet[2880]: I0512 23:40:03.522310 2880 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:40:03.533049 kubelet[2880]: I0512 23:40:03.533024 2880 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 12 23:40:03.533524 kubelet[2880]: I0512 23:40:03.533075 2880 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 23:40:03.546243 kubelet[2880]: I0512 23:40:03.546216 2880 topology_manager.go:215] "Topology Admit Handler" podUID="09a94040ed8b928e8f977c687ec59ddd" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 23:40:03.546481 kubelet[2880]: I0512 23:40:03.546371 2880 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 23:40:03.546481 kubelet[2880]: I0512 23:40:03.546425 2880 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 23:40:03.551691 kubelet[2880]: E0512 23:40:03.551667 2880 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" May 12 23:40:03.620121 kubelet[2880]: I0512 23:40:03.620074 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:40:03.620121 kubelet[2880]: I0512 23:40:03.620117 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09a94040ed8b928e8f977c687ec59ddd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"09a94040ed8b928e8f977c687ec59ddd\") " pod="kube-system/kube-apiserver-localhost" May 12 23:40:03.620121 kubelet[2880]: I0512 23:40:03.620137 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:40:03.621218 kubelet[2880]: I0512 23:40:03.620168 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:40:03.621218 kubelet[2880]: I0512 23:40:03.620189 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:40:03.621218 kubelet[2880]: I0512 23:40:03.620888 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 23:40:03.621218 kubelet[2880]: I0512 23:40:03.620913 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09a94040ed8b928e8f977c687ec59ddd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"09a94040ed8b928e8f977c687ec59ddd\") " pod="kube-system/kube-apiserver-localhost" May 12 23:40:03.621218 kubelet[2880]: I0512 23:40:03.620934 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09a94040ed8b928e8f977c687ec59ddd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"09a94040ed8b928e8f977c687ec59ddd\") " pod="kube-system/kube-apiserver-localhost" May 12 23:40:03.621327 kubelet[2880]: I0512 23:40:03.620952 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:40:04.123240 sudo[2894]: pam_unix(sudo:session): session closed for user root May 12 23:40:04.396555 kubelet[2880]: I0512 23:40:04.396397 2880 apiserver.go:52] "Watching apiserver" May 12 23:40:04.574795 kubelet[2880]: I0512 23:40:04.574411 2880 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 23:40:04.592311 kubelet[2880]: E0512 23:40:04.592258 2880 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 23:40:04.654515 kubelet[2880]: I0512 23:40:04.648906 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.648890211 podStartE2EDuration="2.648890211s" podCreationTimestamp="2025-05-12 23:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:40:04.627164403 +0000 UTC m=+1.314563638" watchObservedRunningTime="2025-05-12 23:40:04.648890211 +0000 UTC m=+1.336289442" May 12 23:40:04.673509 kubelet[2880]: I0512 23:40:04.673406 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6733883729999999 podStartE2EDuration="1.673388373s" podCreationTimestamp="2025-05-12 23:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:40:04.654510119 +0000 UTC m=+1.341909350" watchObservedRunningTime="2025-05-12 23:40:04.673388373 +0000 UTC m=+1.360787609" May 12 23:40:04.722597 kubelet[2880]: I0512 23:40:04.722560 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.722546484 podStartE2EDuration="1.722546484s" podCreationTimestamp="2025-05-12 23:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:40:04.675250576 +0000 UTC m=+1.362649844" watchObservedRunningTime="2025-05-12 23:40:04.722546484 +0000 UTC m=+1.409945709" May 12 23:40:05.704644 sudo[1867]: pam_unix(sudo:session): session closed for user root May 12 23:40:05.706275 sshd[1866]: Connection closed by 139.178.68.195 port 52030 May 12 23:40:05.711464 sshd-session[1863]: pam_unix(sshd:session): session closed for user core May 12 23:40:05.713694 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. May 12 23:40:05.714024 systemd[1]: sshd@6-139.178.70.108:22-139.178.68.195:52030.service: Deactivated successfully. May 12 23:40:05.715451 systemd[1]: session-9.scope: Deactivated successfully. May 12 23:40:05.715679 systemd[1]: session-9.scope: Consumed 3.229s CPU time, 230.1M memory peak. May 12 23:40:05.716826 systemd-logind[1545]: Removed session 9. May 12 23:40:17.812012 kubelet[2880]: I0512 23:40:17.811970 2880 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 23:40:17.817095 containerd[1566]: time="2025-05-12T23:40:17.816411704Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 12 23:40:17.817341 kubelet[2880]: I0512 23:40:17.816587 2880 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 12 23:40:17.894021 kubelet[2880]: I0512 23:40:17.893308 2880 topology_manager.go:215] "Topology Admit Handler" podUID="0164df58-cfc5-404c-a73b-8d45356dcf78" podNamespace="kube-system" podName="cilium-operator-599987898-lfrsq" May 12 23:40:17.901375 systemd[1]: Created slice kubepods-besteffort-pod0164df58_cfc5_404c_a73b_8d45356dcf78.slice - libcontainer container kubepods-besteffort-pod0164df58_cfc5_404c_a73b_8d45356dcf78.slice. May 12 23:40:18.022239 kubelet[2880]: I0512 23:40:18.022160 2880 topology_manager.go:215] "Topology Admit Handler" podUID="0adc2537-4a4e-4a99-b729-7509c4c33ec1" podNamespace="kube-system" podName="kube-proxy-64bhf" May 12 23:40:18.023683 kubelet[2880]: I0512 23:40:18.022297 2880 topology_manager.go:215] "Topology Admit Handler" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" podNamespace="kube-system" podName="cilium-q2wlk" May 12 23:40:18.031945 systemd[1]: Created slice kubepods-besteffort-pod0adc2537_4a4e_4a99_b729_7509c4c33ec1.slice - libcontainer container kubepods-besteffort-pod0adc2537_4a4e_4a99_b729_7509c4c33ec1.slice. May 12 23:40:18.041780 systemd[1]: Created slice kubepods-burstable-pod5a7a7643_15de_49f1_b799_3f44a8701ae4.slice - libcontainer container kubepods-burstable-pod5a7a7643_15de_49f1_b799_3f44a8701ae4.slice. May 12 23:40:18.052434 kubelet[2880]: I0512 23:40:18.052377 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0164df58-cfc5-404c-a73b-8d45356dcf78-cilium-config-path\") pod \"cilium-operator-599987898-lfrsq\" (UID: \"0164df58-cfc5-404c-a73b-8d45356dcf78\") " pod="kube-system/cilium-operator-599987898-lfrsq" May 12 23:40:18.052434 kubelet[2880]: I0512 23:40:18.052398 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w94w\" (UniqueName: \"kubernetes.io/projected/0164df58-cfc5-404c-a73b-8d45356dcf78-kube-api-access-2w94w\") pod \"cilium-operator-599987898-lfrsq\" (UID: \"0164df58-cfc5-404c-a73b-8d45356dcf78\") " pod="kube-system/cilium-operator-599987898-lfrsq" May 12 23:40:18.153334 kubelet[2880]: I0512 23:40:18.152984 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-etc-cni-netd\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.153334 kubelet[2880]: I0512 23:40:18.153028 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-lib-modules\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.153334 kubelet[2880]: I0512 23:40:18.153047 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-kernel\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.153334 kubelet[2880]: I0512 23:40:18.153068 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-hubble-tls\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171228 kubelet[2880]: I0512 23:40:18.153087 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-bpf-maps\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171228 kubelet[2880]: I0512 23:40:18.171203 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-hostproc\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171228 kubelet[2880]: I0512 23:40:18.171217 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-cgroup\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171228 kubelet[2880]: I0512 23:40:18.171228 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cni-path\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171228 kubelet[2880]: I0512 23:40:18.171238 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-config-path\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171413 kubelet[2880]: I0512 23:40:18.171246 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0adc2537-4a4e-4a99-b729-7509c4c33ec1-xtables-lock\") pod \"kube-proxy-64bhf\" (UID: \"0adc2537-4a4e-4a99-b729-7509c4c33ec1\") " pod="kube-system/kube-proxy-64bhf" May 12 23:40:18.171413 kubelet[2880]: I0512 23:40:18.171255 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-xtables-lock\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171413 kubelet[2880]: I0512 23:40:18.171267 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bxf\" (UniqueName: \"kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-kube-api-access-l6bxf\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171413 kubelet[2880]: I0512 23:40:18.171276 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0adc2537-4a4e-4a99-b729-7509c4c33ec1-lib-modules\") pod \"kube-proxy-64bhf\" (UID: \"0adc2537-4a4e-4a99-b729-7509c4c33ec1\") " 
pod="kube-system/kube-proxy-64bhf" May 12 23:40:18.171413 kubelet[2880]: I0512 23:40:18.171298 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a7a7643-15de-49f1-b799-3f44a8701ae4-clustermesh-secrets\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171505 kubelet[2880]: I0512 23:40:18.171309 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-net\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.171505 kubelet[2880]: I0512 23:40:18.171318 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0adc2537-4a4e-4a99-b729-7509c4c33ec1-kube-proxy\") pod \"kube-proxy-64bhf\" (UID: \"0adc2537-4a4e-4a99-b729-7509c4c33ec1\") " pod="kube-system/kube-proxy-64bhf" May 12 23:40:18.171505 kubelet[2880]: I0512 23:40:18.171327 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skqn\" (UniqueName: \"kubernetes.io/projected/0adc2537-4a4e-4a99-b729-7509c4c33ec1-kube-api-access-4skqn\") pod \"kube-proxy-64bhf\" (UID: \"0adc2537-4a4e-4a99-b729-7509c4c33ec1\") " pod="kube-system/kube-proxy-64bhf" May 12 23:40:18.171505 kubelet[2880]: I0512 23:40:18.171344 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-run\") pod \"cilium-q2wlk\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " pod="kube-system/cilium-q2wlk" May 12 23:40:18.210619 containerd[1566]: time="2025-05-12T23:40:18.210560214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lfrsq,Uid:0164df58-cfc5-404c-a73b-8d45356dcf78,Namespace:kube-system,Attempt:0,}" May 12 23:40:18.303066 containerd[1566]: time="2025-05-12T23:40:18.303007455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:40:18.303600 containerd[1566]: time="2025-05-12T23:40:18.303495035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:40:18.303600 containerd[1566]: time="2025-05-12T23:40:18.303514954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:18.303600 containerd[1566]: time="2025-05-12T23:40:18.303575197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:18.317866 systemd[1]: Started cri-containerd-f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc.scope - libcontainer container f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc. 
May 12 23:40:18.333876 containerd[1566]: time="2025-05-12T23:40:18.333850176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64bhf,Uid:0adc2537-4a4e-4a99-b729-7509c4c33ec1,Namespace:kube-system,Attempt:0,}" May 12 23:40:18.345471 containerd[1566]: time="2025-05-12T23:40:18.345277999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2wlk,Uid:5a7a7643-15de-49f1-b799-3f44a8701ae4,Namespace:kube-system,Attempt:0,}" May 12 23:40:18.349749 containerd[1566]: time="2025-05-12T23:40:18.349716862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lfrsq,Uid:0164df58-cfc5-404c-a73b-8d45356dcf78,Namespace:kube-system,Attempt:0,} returns sandbox id \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\"" May 12 23:40:18.388602 containerd[1566]: time="2025-05-12T23:40:18.388569579Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 12 23:40:18.539808 containerd[1566]: time="2025-05-12T23:40:18.539620620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:40:18.539808 containerd[1566]: time="2025-05-12T23:40:18.539697502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:40:18.539808 containerd[1566]: time="2025-05-12T23:40:18.539719757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:18.540032 containerd[1566]: time="2025-05-12T23:40:18.539877728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:18.554869 systemd[1]: Started cri-containerd-7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e.scope - libcontainer container 7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e. May 12 23:40:18.572173 containerd[1566]: time="2025-05-12T23:40:18.572093241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:40:18.572356 containerd[1566]: time="2025-05-12T23:40:18.572180526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:40:18.572356 containerd[1566]: time="2025-05-12T23:40:18.572247985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:18.573596 containerd[1566]: time="2025-05-12T23:40:18.572351711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:18.577272 containerd[1566]: time="2025-05-12T23:40:18.577241967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2wlk,Uid:5a7a7643-15de-49f1-b799-3f44a8701ae4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\"" May 12 23:40:18.587890 systemd[1]: Started cri-containerd-9f881e3009d4f8dbac70c7c4b95cbf2f56b4e6b7bcc3d16167661e5678e18211.scope - libcontainer container 9f881e3009d4f8dbac70c7c4b95cbf2f56b4e6b7bcc3d16167661e5678e18211. 
May 12 23:40:18.605403 containerd[1566]: time="2025-05-12T23:40:18.605267614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64bhf,Uid:0adc2537-4a4e-4a99-b729-7509c4c33ec1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f881e3009d4f8dbac70c7c4b95cbf2f56b4e6b7bcc3d16167661e5678e18211\"" May 12 23:40:18.633811 containerd[1566]: time="2025-05-12T23:40:18.633680634Z" level=info msg="CreateContainer within sandbox \"9f881e3009d4f8dbac70c7c4b95cbf2f56b4e6b7bcc3d16167661e5678e18211\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 12 23:40:18.711708 containerd[1566]: time="2025-05-12T23:40:18.711494052Z" level=info msg="CreateContainer within sandbox \"9f881e3009d4f8dbac70c7c4b95cbf2f56b4e6b7bcc3d16167661e5678e18211\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1679f5cec4c9be6258bfc2330829a4f1d55fda49e737175dae63e63c4b085d3e\"" May 12 23:40:18.733313 containerd[1566]: time="2025-05-12T23:40:18.733289983Z" level=info msg="StartContainer for \"1679f5cec4c9be6258bfc2330829a4f1d55fda49e737175dae63e63c4b085d3e\"" May 12 23:40:18.752865 systemd[1]: Started cri-containerd-1679f5cec4c9be6258bfc2330829a4f1d55fda49e737175dae63e63c4b085d3e.scope - libcontainer container 1679f5cec4c9be6258bfc2330829a4f1d55fda49e737175dae63e63c4b085d3e. May 12 23:40:18.778807 containerd[1566]: time="2025-05-12T23:40:18.777395751Z" level=info msg="StartContainer for \"1679f5cec4c9be6258bfc2330829a4f1d55fda49e737175dae63e63c4b085d3e\" returns successfully" May 12 23:40:19.701536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575093237.mount: Deactivated successfully. May 12 23:40:20.086830 containerd[1566]: time="2025-05-12T23:40:20.086750694Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:40:20.087500 containerd[1566]: time="2025-05-12T23:40:20.087441811Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 12 23:40:20.088995 containerd[1566]: time="2025-05-12T23:40:20.088942940Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:40:20.089646 containerd[1566]: time="2025-05-12T23:40:20.089631813Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.701031691s" May 12 23:40:20.089766 containerd[1566]: time="2025-05-12T23:40:20.089692636Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 12 23:40:20.090509 containerd[1566]: time="2025-05-12T23:40:20.090491914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 12 23:40:20.094475 containerd[1566]: time="2025-05-12T23:40:20.094444458Z" 
level=info msg="CreateContainer within sandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 12 23:40:20.152928 containerd[1566]: time="2025-05-12T23:40:20.152892583Z" level=info msg="CreateContainer within sandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\"" May 12 23:40:20.154044 containerd[1566]: time="2025-05-12T23:40:20.153481164Z" level=info msg="StartContainer for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\"" May 12 23:40:20.173911 systemd[1]: Started cri-containerd-01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9.scope - libcontainer container 01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9. May 12 23:40:20.194438 containerd[1566]: time="2025-05-12T23:40:20.194408436Z" level=info msg="StartContainer for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" returns successfully" May 12 23:40:20.616848 kubelet[2880]: I0512 23:40:20.616655 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-64bhf" podStartSLOduration=3.616641315 podStartE2EDuration="3.616641315s" podCreationTimestamp="2025-05-12 23:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:40:19.69249697 +0000 UTC m=+16.379896199" watchObservedRunningTime="2025-05-12 23:40:20.616641315 +0000 UTC m=+17.304040553" May 12 23:40:23.868432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276949054.mount: Deactivated successfully. 
May 12 23:40:26.861813 containerd[1566]: time="2025-05-12T23:40:26.861772874Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:40:26.862297 containerd[1566]: time="2025-05-12T23:40:26.862241382Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 12 23:40:26.863186 containerd[1566]: time="2025-05-12T23:40:26.863165368Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:40:26.864579 containerd[1566]: time="2025-05-12T23:40:26.864556071Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.774042366s" May 12 23:40:26.865092 containerd[1566]: time="2025-05-12T23:40:26.864583534Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 12 23:40:26.970028 containerd[1566]: time="2025-05-12T23:40:26.969931538Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 12 23:40:26.999353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619468411.mount: Deactivated successfully. May 12 23:40:27.001880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419098572.mount: Deactivated successfully. May 12 23:40:27.007760 containerd[1566]: time="2025-05-12T23:40:27.007716064Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\"" May 12 23:40:27.008237 containerd[1566]: time="2025-05-12T23:40:27.008215226Z" level=info msg="StartContainer for \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\"" May 12 23:40:27.077852 systemd[1]: Started cri-containerd-7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46.scope - libcontainer container 7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46. May 12 23:40:27.116629 containerd[1566]: time="2025-05-12T23:40:27.116488237Z" level=info msg="StartContainer for \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\" returns successfully" May 12 23:40:27.124049 systemd[1]: cri-containerd-7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46.scope: Deactivated successfully. 
May 12 23:40:27.679553 containerd[1566]: time="2025-05-12T23:40:27.676512429Z" level=info msg="shim disconnected" id=7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46 namespace=k8s.io May 12 23:40:27.679553 containerd[1566]: time="2025-05-12T23:40:27.679426018Z" level=warning msg="cleaning up after shim disconnected" id=7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46 namespace=k8s.io May 12 23:40:27.679553 containerd[1566]: time="2025-05-12T23:40:27.679434924Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:40:27.875970 kubelet[2880]: I0512 23:40:27.875923 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-lfrsq" podStartSLOduration=9.173887187 podStartE2EDuration="10.875906256s" podCreationTimestamp="2025-05-12 23:40:17 +0000 UTC" firstStartedPulling="2025-05-12 23:40:18.388248367 +0000 UTC m=+15.075647603" lastFinishedPulling="2025-05-12 23:40:20.090267446 +0000 UTC m=+16.777666672" observedRunningTime="2025-05-12 23:40:20.617028661 +0000 UTC m=+17.304427887" watchObservedRunningTime="2025-05-12 23:40:27.875906256 +0000 UTC m=+24.563305484" May 12 23:40:27.996177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46-rootfs.mount: Deactivated successfully. May 12 23:40:28.650926 containerd[1566]: time="2025-05-12T23:40:28.650566861Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 12 23:40:28.661085 containerd[1566]: time="2025-05-12T23:40:28.661052544Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\"" May 12 23:40:28.662045 containerd[1566]: time="2025-05-12T23:40:28.661467432Z" level=info msg="StartContainer for \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\"" May 12 23:40:28.690901 systemd[1]: Started cri-containerd-c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad.scope - libcontainer container c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad. May 12 23:40:28.707719 containerd[1566]: time="2025-05-12T23:40:28.707693007Z" level=info msg="StartContainer for \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\" returns successfully" May 12 23:40:28.718672 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 12 23:40:28.718842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 12 23:40:28.719390 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 12 23:40:28.722947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 23:40:28.723104 systemd[1]: cri-containerd-c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad.scope: Deactivated successfully. 
May 12 23:40:28.742477 containerd[1566]: time="2025-05-12T23:40:28.742341863Z" level=info msg="shim disconnected" id=c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad namespace=k8s.io May 12 23:40:28.742477 containerd[1566]: time="2025-05-12T23:40:28.742376207Z" level=warning msg="cleaning up after shim disconnected" id=c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad namespace=k8s.io May 12 23:40:28.742477 containerd[1566]: time="2025-05-12T23:40:28.742381539Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:40:28.761010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 23:40:28.996075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad-rootfs.mount: Deactivated successfully. May 12 23:40:29.658887 containerd[1566]: time="2025-05-12T23:40:29.658814062Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 12 23:40:29.702182 containerd[1566]: time="2025-05-12T23:40:29.702156885Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\"" May 12 23:40:29.702579 containerd[1566]: time="2025-05-12T23:40:29.702568068Z" level=info msg="StartContainer for \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\"" May 12 23:40:29.724850 systemd[1]: Started cri-containerd-7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30.scope - libcontainer container 7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30. May 12 23:40:29.741333 containerd[1566]: time="2025-05-12T23:40:29.741314312Z" level=info msg="StartContainer for \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\" returns successfully" May 12 23:40:29.749700 systemd[1]: cri-containerd-7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30.scope: Deactivated successfully. May 12 23:40:29.749870 systemd[1]: cri-containerd-7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30.scope: Consumed 11ms CPU time, 5.4M memory peak, 1M read from disk. May 12 23:40:29.762261 containerd[1566]: time="2025-05-12T23:40:29.762207993Z" level=info msg="shim disconnected" id=7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30 namespace=k8s.io May 12 23:40:29.762261 containerd[1566]: time="2025-05-12T23:40:29.762253975Z" level=warning msg="cleaning up after shim disconnected" id=7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30 namespace=k8s.io May 12 23:40:29.762261 containerd[1566]: time="2025-05-12T23:40:29.762259750Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:40:29.996040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30-rootfs.mount: Deactivated successfully. 
May 12 23:40:30.655756 containerd[1566]: time="2025-05-12T23:40:30.655391981Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 12 23:40:30.667058 containerd[1566]: time="2025-05-12T23:40:30.667029679Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\"" May 12 23:40:30.667526 containerd[1566]: time="2025-05-12T23:40:30.667460516Z" level=info msg="StartContainer for \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\"" May 12 23:40:30.690868 systemd[1]: Started cri-containerd-feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37.scope - libcontainer container feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37. May 12 23:40:30.707482 systemd[1]: cri-containerd-feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37.scope: Deactivated successfully. May 12 23:40:30.716563 containerd[1566]: time="2025-05-12T23:40:30.716457522Z" level=info msg="StartContainer for \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\" returns successfully" May 12 23:40:30.730086 containerd[1566]: time="2025-05-12T23:40:30.730002156Z" level=info msg="shim disconnected" id=feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37 namespace=k8s.io May 12 23:40:30.730086 containerd[1566]: time="2025-05-12T23:40:30.730079730Z" level=warning msg="cleaning up after shim disconnected" id=feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37 namespace=k8s.io May 12 23:40:30.730245 containerd[1566]: time="2025-05-12T23:40:30.730094334Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:40:30.996080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37-rootfs.mount: Deactivated successfully. May 12 23:40:31.659053 containerd[1566]: time="2025-05-12T23:40:31.658926107Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 12 23:40:31.679005 containerd[1566]: time="2025-05-12T23:40:31.678977660Z" level=info msg="CreateContainer within sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\"" May 12 23:40:31.680930 containerd[1566]: time="2025-05-12T23:40:31.679486323Z" level=info msg="StartContainer for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\"" May 12 23:40:31.699884 systemd[1]: Started cri-containerd-d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901.scope - libcontainer container d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901. 
May 12 23:40:31.720559 containerd[1566]: time="2025-05-12T23:40:31.720528854Z" level=info msg="StartContainer for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" returns successfully" May 12 23:40:31.877833 kubelet[2880]: I0512 23:40:31.877756 2880 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 12 23:40:31.906041 kubelet[2880]: I0512 23:40:31.905521 2880 topology_manager.go:215] "Topology Admit Handler" podUID="731a4acf-36ff-4c87-a4db-02115a90ee81" podNamespace="kube-system" podName="coredns-7db6d8ff4d-276jk" May 12 23:40:31.908636 kubelet[2880]: I0512 23:40:31.908352 2880 topology_manager.go:215] "Topology Admit Handler" podUID="976e96bc-2abf-4899-a08a-0dd42f437878" podNamespace="kube-system" podName="coredns-7db6d8ff4d-szz8v" May 12 23:40:31.937793 systemd[1]: Created slice kubepods-burstable-pod976e96bc_2abf_4899_a08a_0dd42f437878.slice - libcontainer container kubepods-burstable-pod976e96bc_2abf_4899_a08a_0dd42f437878.slice. May 12 23:40:31.944311 systemd[1]: Created slice kubepods-burstable-pod731a4acf_36ff_4c87_a4db_02115a90ee81.slice - libcontainer container kubepods-burstable-pod731a4acf_36ff_4c87_a4db_02115a90ee81.slice. May 12 23:40:32.080288 kubelet[2880]: I0512 23:40:32.080262 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf2dw\" (UniqueName: \"kubernetes.io/projected/731a4acf-36ff-4c87-a4db-02115a90ee81-kube-api-access-bf2dw\") pod \"coredns-7db6d8ff4d-276jk\" (UID: \"731a4acf-36ff-4c87-a4db-02115a90ee81\") " pod="kube-system/coredns-7db6d8ff4d-276jk" May 12 23:40:32.080288 kubelet[2880]: I0512 23:40:32.080291 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/976e96bc-2abf-4899-a08a-0dd42f437878-config-volume\") pod \"coredns-7db6d8ff4d-szz8v\" (UID: \"976e96bc-2abf-4899-a08a-0dd42f437878\") " pod="kube-system/coredns-7db6d8ff4d-szz8v" May 12 23:40:32.080413 kubelet[2880]: I0512 23:40:32.080304 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxvcv\" (UniqueName: \"kubernetes.io/projected/976e96bc-2abf-4899-a08a-0dd42f437878-kube-api-access-pxvcv\") pod \"coredns-7db6d8ff4d-szz8v\" (UID: \"976e96bc-2abf-4899-a08a-0dd42f437878\") " pod="kube-system/coredns-7db6d8ff4d-szz8v" May 12 23:40:32.080413 kubelet[2880]: I0512 23:40:32.080315 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/731a4acf-36ff-4c87-a4db-02115a90ee81-config-volume\") pod \"coredns-7db6d8ff4d-276jk\" (UID: \"731a4acf-36ff-4c87-a4db-02115a90ee81\") " pod="kube-system/coredns-7db6d8ff4d-276jk" May 12 23:40:32.249291 containerd[1566]: time="2025-05-12T23:40:32.248901639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-276jk,Uid:731a4acf-36ff-4c87-a4db-02115a90ee81,Namespace:kube-system,Attempt:0,}" May 12 23:40:32.249853 containerd[1566]: time="2025-05-12T23:40:32.249836930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-szz8v,Uid:976e96bc-2abf-4899-a08a-0dd42f437878,Namespace:kube-system,Attempt:0,}" May 12 23:40:33.823236 systemd-networkd[1257]: cilium_host: Link UP May 12 23:40:33.823645 systemd-networkd[1257]: cilium_net: Link UP May 12 23:40:33.824284 systemd-networkd[1257]: cilium_net: Gained carrier May 12 23:40:33.826378 
systemd-networkd[1257]: cilium_host: Gained carrier May 12 23:40:33.927157 systemd-networkd[1257]: cilium_vxlan: Link UP May 12 23:40:33.927286 systemd-networkd[1257]: cilium_vxlan: Gained carrier May 12 23:40:34.155868 systemd-networkd[1257]: cilium_net: Gained IPv6LL May 12 23:40:34.227859 systemd-networkd[1257]: cilium_host: Gained IPv6LL May 12 23:40:34.386799 kernel: NET: Registered PF_ALG protocol family May 12 23:40:34.929236 systemd-networkd[1257]: lxc_health: Link UP May 12 23:40:34.929417 systemd-networkd[1257]: lxc_health: Gained carrier May 12 23:40:35.227855 systemd-networkd[1257]: cilium_vxlan: Gained IPv6LL May 12 23:40:35.320591 systemd-networkd[1257]: lxc639c5dd38525: Link UP May 12 23:40:35.332415 systemd-networkd[1257]: lxcae8ac8e2d5be: Link UP May 12 23:40:35.333745 kernel: eth0: renamed from tmpa173c May 12 23:40:35.339303 kernel: eth0: renamed from tmp812f2 May 12 23:40:35.346093 systemd-networkd[1257]: lxcae8ac8e2d5be: Gained carrier May 12 23:40:35.346237 systemd-networkd[1257]: lxc639c5dd38525: Gained carrier May 12 23:40:36.187937 systemd-networkd[1257]: lxc_health: Gained IPv6LL May 12 23:40:36.356865 kubelet[2880]: I0512 23:40:36.356392 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2wlk" podStartSLOduration=11.067159863 podStartE2EDuration="19.356378744s" podCreationTimestamp="2025-05-12 23:40:17 +0000 UTC" firstStartedPulling="2025-05-12 23:40:18.58145191 +0000 UTC m=+15.268851136" lastFinishedPulling="2025-05-12 23:40:26.870670791 +0000 UTC m=+23.558070017" observedRunningTime="2025-05-12 23:40:32.672846102 +0000 UTC m=+29.360245332" watchObservedRunningTime="2025-05-12 23:40:36.356378744 +0000 UTC m=+33.043777972" May 12 23:40:37.275880 systemd-networkd[1257]: lxcae8ac8e2d5be: Gained IPv6LL May 12 23:40:37.340906 systemd-networkd[1257]: lxc639c5dd38525: Gained IPv6LL May 12 23:40:38.041522 containerd[1566]: time="2025-05-12T23:40:38.040211981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:40:38.041522 containerd[1566]: time="2025-05-12T23:40:38.040249013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:40:38.041522 containerd[1566]: time="2025-05-12T23:40:38.040353870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:38.041522 containerd[1566]: time="2025-05-12T23:40:38.040501514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:38.055780 containerd[1566]: time="2025-05-12T23:40:38.055049310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:40:38.055780 containerd[1566]: time="2025-05-12T23:40:38.055104337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:40:38.055780 containerd[1566]: time="2025-05-12T23:40:38.055114946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:38.055780 containerd[1566]: time="2025-05-12T23:40:38.055156649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:40:38.074843 systemd[1]: Started cri-containerd-812f218c5234e18f111d1778ccc4e0a6f4b12c5909b122065c241b4cb80e29e7.scope - libcontainer container 812f218c5234e18f111d1778ccc4e0a6f4b12c5909b122065c241b4cb80e29e7. May 12 23:40:38.076657 systemd[1]: Started cri-containerd-a173c6f9012438421c7206d9ebbbb80b7dcb15946bc93dd7d545c745a632c89c.scope - libcontainer container a173c6f9012438421c7206d9ebbbb80b7dcb15946bc93dd7d545c745a632c89c. May 12 23:40:38.087708 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 23:40:38.098085 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 23:40:38.116306 containerd[1566]: time="2025-05-12T23:40:38.116285320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-276jk,Uid:731a4acf-36ff-4c87-a4db-02115a90ee81,Namespace:kube-system,Attempt:0,} returns sandbox id \"a173c6f9012438421c7206d9ebbbb80b7dcb15946bc93dd7d545c745a632c89c\"" May 12 23:40:38.121562 containerd[1566]: time="2025-05-12T23:40:38.121488876Z" level=info msg="CreateContainer within sandbox \"a173c6f9012438421c7206d9ebbbb80b7dcb15946bc93dd7d545c745a632c89c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 23:40:38.133509 containerd[1566]: time="2025-05-12T23:40:38.133477079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-szz8v,Uid:976e96bc-2abf-4899-a08a-0dd42f437878,Namespace:kube-system,Attempt:0,} returns sandbox id \"812f218c5234e18f111d1778ccc4e0a6f4b12c5909b122065c241b4cb80e29e7\"" May 12 23:40:38.138933 containerd[1566]: time="2025-05-12T23:40:38.138239872Z" level=info msg="CreateContainer within sandbox \"812f218c5234e18f111d1778ccc4e0a6f4b12c5909b122065c241b4cb80e29e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 23:40:38.143992 containerd[1566]: time="2025-05-12T23:40:38.143894153Z" level=info msg="CreateContainer within sandbox \"a173c6f9012438421c7206d9ebbbb80b7dcb15946bc93dd7d545c745a632c89c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68e7ae0fb73e1c20d6e4e084052a39d39011c6b219250f2c644a0c9f30dedef8\"" May 12 23:40:38.144622 containerd[1566]: time="2025-05-12T23:40:38.144605476Z" level=info msg="StartContainer for \"68e7ae0fb73e1c20d6e4e084052a39d39011c6b219250f2c644a0c9f30dedef8\"" May 12 23:40:38.146257 containerd[1566]: time="2025-05-12T23:40:38.146230489Z" level=info msg="CreateContainer within sandbox \"812f218c5234e18f111d1778ccc4e0a6f4b12c5909b122065c241b4cb80e29e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c14faf5862a36b495ad92d94efbaafdf71035293797cada5c1412252805ba8c3\"" May 12 23:40:38.146653 containerd[1566]: time="2025-05-12T23:40:38.146637783Z" level=info msg="StartContainer for \"c14faf5862a36b495ad92d94efbaafdf71035293797cada5c1412252805ba8c3\"" May 12 23:40:38.168005 systemd[1]: Started cri-containerd-68e7ae0fb73e1c20d6e4e084052a39d39011c6b219250f2c644a0c9f30dedef8.scope - libcontainer container 68e7ae0fb73e1c20d6e4e084052a39d39011c6b219250f2c644a0c9f30dedef8. May 12 23:40:38.174887 systemd[1]: Started cri-containerd-c14faf5862a36b495ad92d94efbaafdf71035293797cada5c1412252805ba8c3.scope - libcontainer container c14faf5862a36b495ad92d94efbaafdf71035293797cada5c1412252805ba8c3. 
May 12 23:40:38.199192 containerd[1566]: time="2025-05-12T23:40:38.199041618Z" level=info msg="StartContainer for \"68e7ae0fb73e1c20d6e4e084052a39d39011c6b219250f2c644a0c9f30dedef8\" returns successfully" May 12 23:40:38.200129 containerd[1566]: time="2025-05-12T23:40:38.199496302Z" level=info msg="StartContainer for \"c14faf5862a36b495ad92d94efbaafdf71035293797cada5c1412252805ba8c3\" returns successfully" May 12 23:40:38.791142 kubelet[2880]: I0512 23:40:38.790907 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-szz8v" podStartSLOduration=21.790893244 podStartE2EDuration="21.790893244s" podCreationTimestamp="2025-05-12 23:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:40:38.790691524 +0000 UTC m=+35.478090758" watchObservedRunningTime="2025-05-12 23:40:38.790893244 +0000 UTC m=+35.478292485" May 12 23:40:38.806597 kubelet[2880]: I0512 23:40:38.806565 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-276jk" podStartSLOduration=21.806552023 podStartE2EDuration="21.806552023s" podCreationTimestamp="2025-05-12 23:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:40:38.806147919 +0000 UTC m=+35.493547153" watchObservedRunningTime="2025-05-12 23:40:38.806552023 +0000 UTC m=+35.493951252" May 12 23:40:39.045985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767602204.mount: Deactivated successfully. May 12 23:41:09.806658 systemd[1]: Started sshd@7-139.178.70.108:22-139.178.68.195:39918.service - OpenSSH per-connection server daemon (139.178.68.195:39918). May 12 23:41:09.862349 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 39918 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:09.863460 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:09.866946 systemd-logind[1545]: New session 10 of user core. May 12 23:41:09.870826 systemd[1]: Started session-10.scope - Session 10 of User core. May 12 23:41:10.330758 sshd[4255]: Connection closed by 139.178.68.195 port 39918 May 12 23:41:10.331182 sshd-session[4253]: pam_unix(sshd:session): session closed for user core May 12 23:41:10.332886 systemd[1]: sshd@7-139.178.70.108:22-139.178.68.195:39918.service: Deactivated successfully. May 12 23:41:10.335630 systemd[1]: session-10.scope: Deactivated successfully. May 12 23:41:10.337813 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. May 12 23:41:10.338370 systemd-logind[1545]: Removed session 10. May 12 23:41:15.347004 systemd[1]: Started sshd@8-139.178.70.108:22-139.178.68.195:37442.service - OpenSSH per-connection server daemon (139.178.68.195:37442). May 12 23:41:15.387301 sshd[4268]: Accepted publickey for core from 139.178.68.195 port 37442 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:15.388394 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:15.391602 systemd-logind[1545]: New session 11 of user core. May 12 23:41:15.395949 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 12 23:41:15.508764 sshd[4270]: Connection closed by 139.178.68.195 port 37442 May 12 23:41:15.508690 sshd-session[4268]: pam_unix(sshd:session): session closed for user core May 12 23:41:15.511023 systemd[1]: sshd@8-139.178.70.108:22-139.178.68.195:37442.service: Deactivated successfully. May 12 23:41:15.512481 systemd[1]: session-11.scope: Deactivated successfully. May 12 23:41:15.513256 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. May 12 23:41:15.513911 systemd-logind[1545]: Removed session 11. May 12 23:41:20.525076 systemd[1]: Started sshd@9-139.178.70.108:22-139.178.68.195:37450.service - OpenSSH per-connection server daemon (139.178.68.195:37450). May 12 23:41:20.563413 sshd[4286]: Accepted publickey for core from 139.178.68.195 port 37450 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:20.564211 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:20.567409 systemd-logind[1545]: New session 12 of user core. May 12 23:41:20.576907 systemd[1]: Started session-12.scope - Session 12 of User core. May 12 23:41:20.668553 sshd[4288]: Connection closed by 139.178.68.195 port 37450 May 12 23:41:20.668847 sshd-session[4286]: pam_unix(sshd:session): session closed for user core May 12 23:41:20.670907 systemd[1]: sshd@9-139.178.70.108:22-139.178.68.195:37450.service: Deactivated successfully. May 12 23:41:20.672672 systemd[1]: session-12.scope: Deactivated successfully. May 12 23:41:20.674243 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. May 12 23:41:20.675213 systemd-logind[1545]: Removed session 12. May 12 23:41:25.688181 systemd[1]: Started sshd@10-139.178.70.108:22-139.178.68.195:42504.service - OpenSSH per-connection server daemon (139.178.68.195:42504). May 12 23:41:25.720490 sshd[4301]: Accepted publickey for core from 139.178.68.195 port 42504 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:25.721334 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:25.724136 systemd-logind[1545]: New session 13 of user core. May 12 23:41:25.726863 systemd[1]: Started session-13.scope - Session 13 of User core. May 12 23:41:25.897406 sshd[4303]: Connection closed by 139.178.68.195 port 42504 May 12 23:41:25.897893 sshd-session[4301]: pam_unix(sshd:session): session closed for user core May 12 23:41:25.906076 systemd[1]: sshd@10-139.178.70.108:22-139.178.68.195:42504.service: Deactivated successfully. May 12 23:41:25.907164 systemd[1]: session-13.scope: Deactivated successfully. May 12 23:41:25.907614 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. May 12 23:41:25.914078 systemd[1]: Started sshd@11-139.178.70.108:22-139.178.68.195:42520.service - OpenSSH per-connection server daemon (139.178.68.195:42520). May 12 23:41:25.914943 systemd-logind[1545]: Removed session 13. May 12 23:41:26.045713 sshd[4315]: Accepted publickey for core from 139.178.68.195 port 42520 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:26.046545 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:26.049285 systemd-logind[1545]: New session 14 of user core. May 12 23:41:26.059894 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 12 23:41:26.385613 sshd[4318]: Connection closed by 139.178.68.195 port 42520 May 12 23:41:26.394244 systemd[1]: sshd@11-139.178.70.108:22-139.178.68.195:42520.service: Deactivated successfully. May 12 23:41:26.386117 sshd-session[4315]: pam_unix(sshd:session): session closed for user core May 12 23:41:26.396400 systemd[1]: session-14.scope: Deactivated successfully. May 12 23:41:26.397521 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. May 12 23:41:26.403339 systemd[1]: Started sshd@12-139.178.70.108:22-139.178.68.195:42528.service - OpenSSH per-connection server daemon (139.178.68.195:42528). May 12 23:41:26.404502 systemd-logind[1545]: Removed session 14. May 12 23:41:26.440749 sshd[4326]: Accepted publickey for core from 139.178.68.195 port 42528 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:26.441718 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:26.445191 systemd-logind[1545]: New session 15 of user core. May 12 23:41:26.451002 systemd[1]: Started session-15.scope - Session 15 of User core. May 12 23:41:26.556766 sshd[4329]: Connection closed by 139.178.68.195 port 42528 May 12 23:41:26.557105 sshd-session[4326]: pam_unix(sshd:session): session closed for user core May 12 23:41:26.559274 systemd[1]: sshd@12-139.178.70.108:22-139.178.68.195:42528.service: Deactivated successfully. May 12 23:41:26.560389 systemd[1]: session-15.scope: Deactivated successfully. May 12 23:41:26.560903 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. May 12 23:41:26.561416 systemd-logind[1545]: Removed session 15. May 12 23:41:31.567897 systemd[1]: Started sshd@13-139.178.70.108:22-139.178.68.195:42542.service - OpenSSH per-connection server daemon (139.178.68.195:42542). May 12 23:41:31.649724 sshd[4343]: Accepted publickey for core from 139.178.68.195 port 42542 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:31.650581 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:31.653207 systemd-logind[1545]: New session 16 of user core. May 12 23:41:31.658902 systemd[1]: Started session-16.scope - Session 16 of User core. May 12 23:41:31.759639 sshd[4345]: Connection closed by 139.178.68.195 port 42542 May 12 23:41:31.760041 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 12 23:41:31.762535 systemd[1]: sshd@13-139.178.70.108:22-139.178.68.195:42542.service: Deactivated successfully. May 12 23:41:31.764328 systemd[1]: session-16.scope: Deactivated successfully. May 12 23:41:31.765109 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. May 12 23:41:31.765968 systemd-logind[1545]: Removed session 16. May 12 23:41:36.769727 systemd[1]: Started sshd@14-139.178.70.108:22-139.178.68.195:44350.service - OpenSSH per-connection server daemon (139.178.68.195:44350). May 12 23:41:36.804852 sshd[4357]: Accepted publickey for core from 139.178.68.195 port 44350 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:36.805863 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:36.808976 systemd-logind[1545]: New session 17 of user core. May 12 23:41:36.813920 systemd[1]: Started session-17.scope - Session 17 of User core. 
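
The sshd entries in this stretch of the journal all follow the same lifecycle: systemd starts a per-connection sshd@N-<local>:22-<peer>:<port>.service unit, pam_unix opens a session for user core, systemd-logind creates session-N.scope, and the same steps unwind when the connection closes. A rough sketch, written only against the exact wording shown in these lines (the patterns are an assumption, not a stable journal interface), for pulling session open/close events out of such text:

```go
// Sketch only: extract SSH session open/close events from journal text in the
// format shown above. The regular expressions match this particular wording
// and are an illustration, not a parser intended for production use.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	closed = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			fmt.Printf("session %s opened for user %s\n", m[1], m[2])
		}
		if m := closed.FindStringSubmatch(line); m != nil {
			fmt.Printf("session %s closed\n", m[1])
		}
	}
}
```
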
May 12 23:41:36.912473 sshd[4359]: Connection closed by 139.178.68.195 port 44350 May 12 23:41:36.912951 sshd-session[4357]: pam_unix(sshd:session): session closed for user core May 12 23:41:36.919640 systemd[1]: sshd@14-139.178.70.108:22-139.178.68.195:44350.service: Deactivated successfully. May 12 23:41:36.920748 systemd[1]: session-17.scope: Deactivated successfully. May 12 23:41:36.921321 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. May 12 23:41:36.926128 systemd[1]: Started sshd@15-139.178.70.108:22-139.178.68.195:44354.service - OpenSSH per-connection server daemon (139.178.68.195:44354). May 12 23:41:36.927553 systemd-logind[1545]: Removed session 17. May 12 23:41:36.965611 sshd[4370]: Accepted publickey for core from 139.178.68.195 port 44354 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:36.966633 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:36.969807 systemd-logind[1545]: New session 18 of user core. May 12 23:41:36.975898 systemd[1]: Started session-18.scope - Session 18 of User core. May 12 23:41:37.723784 sshd[4373]: Connection closed by 139.178.68.195 port 44354 May 12 23:41:37.728098 sshd-session[4370]: pam_unix(sshd:session): session closed for user core May 12 23:41:37.735134 systemd[1]: Started sshd@16-139.178.70.108:22-139.178.68.195:44356.service - OpenSSH per-connection server daemon (139.178.68.195:44356). May 12 23:41:37.742967 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. May 12 23:41:37.743075 systemd[1]: sshd@15-139.178.70.108:22-139.178.68.195:44354.service: Deactivated successfully. May 12 23:41:37.744329 systemd[1]: session-18.scope: Deactivated successfully. May 12 23:41:37.746181 systemd-logind[1545]: Removed session 18. May 12 23:41:37.779775 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 44356 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:37.780573 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:37.786845 systemd-logind[1545]: New session 19 of user core. May 12 23:41:37.791818 systemd[1]: Started session-19.scope - Session 19 of User core. May 12 23:41:39.098387 sshd[4385]: Connection closed by 139.178.68.195 port 44356 May 12 23:41:39.099585 sshd-session[4380]: pam_unix(sshd:session): session closed for user core May 12 23:41:39.106374 systemd[1]: sshd@16-139.178.70.108:22-139.178.68.195:44356.service: Deactivated successfully. May 12 23:41:39.109199 systemd[1]: session-19.scope: Deactivated successfully. May 12 23:41:39.110885 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. May 12 23:41:39.118471 systemd[1]: Started sshd@17-139.178.70.108:22-139.178.68.195:44360.service - OpenSSH per-connection server daemon (139.178.68.195:44360). May 12 23:41:39.120354 systemd-logind[1545]: Removed session 19. May 12 23:41:39.166427 sshd[4402]: Accepted publickey for core from 139.178.68.195 port 44360 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:39.167259 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:39.170748 systemd-logind[1545]: New session 20 of user core. May 12 23:41:39.174822 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 12 23:41:39.356134 sshd[4405]: Connection closed by 139.178.68.195 port 44360 May 12 23:41:39.356860 sshd-session[4402]: pam_unix(sshd:session): session closed for user core May 12 23:41:39.365663 systemd[1]: sshd@17-139.178.70.108:22-139.178.68.195:44360.service: Deactivated successfully. May 12 23:41:39.367346 systemd[1]: session-20.scope: Deactivated successfully. May 12 23:41:39.368009 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. May 12 23:41:39.374977 systemd[1]: Started sshd@18-139.178.70.108:22-139.178.68.195:44370.service - OpenSSH per-connection server daemon (139.178.68.195:44370). May 12 23:41:39.376213 systemd-logind[1545]: Removed session 20. May 12 23:41:39.406701 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 44370 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:39.407529 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:39.411872 systemd-logind[1545]: New session 21 of user core. May 12 23:41:39.414117 systemd[1]: Started session-21.scope - Session 21 of User core. May 12 23:41:39.505679 sshd[4417]: Connection closed by 139.178.68.195 port 44370 May 12 23:41:39.506048 sshd-session[4414]: pam_unix(sshd:session): session closed for user core May 12 23:41:39.508713 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. May 12 23:41:39.508813 systemd[1]: sshd@18-139.178.70.108:22-139.178.68.195:44370.service: Deactivated successfully. May 12 23:41:39.509957 systemd[1]: session-21.scope: Deactivated successfully. May 12 23:41:39.510543 systemd-logind[1545]: Removed session 21. May 12 23:41:44.523924 systemd[1]: Started sshd@19-139.178.70.108:22-139.178.68.195:60712.service - OpenSSH per-connection server daemon (139.178.68.195:60712). May 12 23:41:44.557100 sshd[4432]: Accepted publickey for core from 139.178.68.195 port 60712 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:44.557981 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:44.560969 systemd-logind[1545]: New session 22 of user core. May 12 23:41:44.565869 systemd[1]: Started session-22.scope - Session 22 of User core. May 12 23:41:44.685302 sshd[4434]: Connection closed by 139.178.68.195 port 60712 May 12 23:41:44.684447 sshd-session[4432]: pam_unix(sshd:session): session closed for user core May 12 23:41:44.686704 systemd[1]: sshd@19-139.178.70.108:22-139.178.68.195:60712.service: Deactivated successfully. May 12 23:41:44.688394 systemd[1]: session-22.scope: Deactivated successfully. May 12 23:41:44.689112 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. May 12 23:41:44.689758 systemd-logind[1545]: Removed session 22. May 12 23:41:49.695261 systemd[1]: Started sshd@20-139.178.70.108:22-139.178.68.195:60728.service - OpenSSH per-connection server daemon (139.178.68.195:60728). May 12 23:41:49.729933 sshd[4449]: Accepted publickey for core from 139.178.68.195 port 60728 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:49.730787 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:49.733422 systemd-logind[1545]: New session 23 of user core. May 12 23:41:49.738827 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 12 23:41:49.826213 sshd[4451]: Connection closed by 139.178.68.195 port 60728 May 12 23:41:49.826556 sshd-session[4449]: pam_unix(sshd:session): session closed for user core May 12 23:41:49.828837 systemd[1]: sshd@20-139.178.70.108:22-139.178.68.195:60728.service: Deactivated successfully. May 12 23:41:49.829994 systemd[1]: session-23.scope: Deactivated successfully. May 12 23:41:49.830952 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. May 12 23:41:49.831551 systemd-logind[1545]: Removed session 23. May 12 23:41:54.845897 systemd[1]: Started sshd@21-139.178.70.108:22-139.178.68.195:53652.service - OpenSSH per-connection server daemon (139.178.68.195:53652). May 12 23:41:54.877979 sshd[4463]: Accepted publickey for core from 139.178.68.195 port 53652 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:54.878889 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:54.882785 systemd-logind[1545]: New session 24 of user core. May 12 23:41:54.887847 systemd[1]: Started session-24.scope - Session 24 of User core. May 12 23:41:54.977486 sshd[4465]: Connection closed by 139.178.68.195 port 53652 May 12 23:41:54.977840 sshd-session[4463]: pam_unix(sshd:session): session closed for user core May 12 23:41:54.983952 systemd[1]: sshd@21-139.178.70.108:22-139.178.68.195:53652.service: Deactivated successfully. May 12 23:41:54.984965 systemd[1]: session-24.scope: Deactivated successfully. May 12 23:41:54.985372 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. May 12 23:41:54.986450 systemd[1]: Started sshd@22-139.178.70.108:22-139.178.68.195:53660.service - OpenSSH per-connection server daemon (139.178.68.195:53660). May 12 23:41:54.987705 systemd-logind[1545]: Removed session 24. May 12 23:41:55.020475 sshd[4476]: Accepted publickey for core from 139.178.68.195 port 53660 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:55.021390 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:55.025793 systemd-logind[1545]: New session 25 of user core. May 12 23:41:55.035918 systemd[1]: Started session-25.scope - Session 25 of User core. May 12 23:41:56.388778 containerd[1566]: time="2025-05-12T23:41:56.388544353Z" level=info msg="StopContainer for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" with timeout 30 (s)" May 12 23:41:56.394618 containerd[1566]: time="2025-05-12T23:41:56.394570118Z" level=info msg="Stop container \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" with signal terminated" May 12 23:41:56.460940 systemd[1]: cri-containerd-01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9.scope: Deactivated successfully. May 12 23:41:56.461478 systemd[1]: cri-containerd-01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9.scope: Consumed 249ms CPU time, 24.8M memory peak, 6.5M read from disk, 4K written to disk. May 12 23:41:56.475370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9-rootfs.mount: Deactivated successfully. 
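
The last entries above show the stop sequence for the cilium-operator container 01e842d1...: kubelet issues StopContainer with a 30-second timeout, containerd reports "Stop container ... with signal terminated", and the container's systemd scope is deactivated together with its accumulated CPU and memory accounting. A hedged sketch of the same SIGTERM-then-SIGKILL-after-timeout pattern against the containerd task API; the container ID is a placeholder and this is not presented as kubelet's own implementation:

```go
// Sketch of a SIGTERM-then-SIGKILL stop with a grace period, mirroring the
// "StopContainer ... with timeout 30 (s)" / "signal terminated" sequence in
// the log. The container ID below is a placeholder; error handling is terse.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, "0123456789abcdef") // placeholder ID
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Ask the task to terminate, then give it a grace period before SIGKILL.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(30 * time.Second):
		log.Print("grace period expired, sending SIGKILL")
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}
```
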
May 12 23:41:56.493487 containerd[1566]: time="2025-05-12T23:41:56.493428766Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 23:41:56.508375 containerd[1566]: time="2025-05-12T23:41:56.508343124Z" level=info msg="StopContainer for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" with timeout 2 (s)" May 12 23:41:56.508550 containerd[1566]: time="2025-05-12T23:41:56.508535801Z" level=info msg="Stop container \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" with signal terminated" May 12 23:41:56.516174 systemd-networkd[1257]: lxc_health: Link DOWN May 12 23:41:56.516180 systemd-networkd[1257]: lxc_health: Lost carrier May 12 23:41:56.533282 systemd[1]: cri-containerd-d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901.scope: Deactivated successfully. May 12 23:41:56.533510 systemd[1]: cri-containerd-d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901.scope: Consumed 4.543s CPU time, 189.3M memory peak, 66.4M read from disk, 13.3M written to disk. May 12 23:41:56.547670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901-rootfs.mount: Deactivated successfully. May 12 23:41:56.718425 containerd[1566]: time="2025-05-12T23:41:56.718368510Z" level=info msg="shim disconnected" id=d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901 namespace=k8s.io May 12 23:41:56.718866 containerd[1566]: time="2025-05-12T23:41:56.718691709Z" level=warning msg="cleaning up after shim disconnected" id=d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901 namespace=k8s.io May 12 23:41:56.718866 containerd[1566]: time="2025-05-12T23:41:56.718710447Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:41:56.718866 containerd[1566]: time="2025-05-12T23:41:56.718598897Z" level=info msg="shim disconnected" id=01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9 namespace=k8s.io May 12 23:41:56.718866 containerd[1566]: time="2025-05-12T23:41:56.718789220Z" level=warning msg="cleaning up after shim disconnected" id=01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9 namespace=k8s.io May 12 23:41:56.718866 containerd[1566]: time="2025-05-12T23:41:56.718796574Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:41:56.738731 containerd[1566]: time="2025-05-12T23:41:56.738587821Z" level=info msg="StopContainer for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" returns successfully" May 12 23:41:56.738731 containerd[1566]: time="2025-05-12T23:41:56.738739349Z" level=info msg="StopContainer for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" returns successfully" May 12 23:41:56.739154 containerd[1566]: time="2025-05-12T23:41:56.739136340Z" level=info msg="StopPodSandbox for \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\"" May 12 23:41:56.739279 containerd[1566]: time="2025-05-12T23:41:56.739207391Z" level=info msg="StopPodSandbox for \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\"" May 12 23:41:56.747746 containerd[1566]: time="2025-05-12T23:41:56.740581905Z" level=info msg="Container to stop \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" May 12 23:41:56.747350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc-shm.mount: Deactivated successfully. May 12 23:41:56.752106 containerd[1566]: time="2025-05-12T23:41:56.740577985Z" level=info msg="Container to stop \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 12 23:41:56.752106 containerd[1566]: time="2025-05-12T23:41:56.752105632Z" level=info msg="Container to stop \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 12 23:41:56.752405 containerd[1566]: time="2025-05-12T23:41:56.752112783Z" level=info msg="Container to stop \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 12 23:41:56.752405 containerd[1566]: time="2025-05-12T23:41:56.752123900Z" level=info msg="Container to stop \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 12 23:41:56.752405 containerd[1566]: time="2025-05-12T23:41:56.752130228Z" level=info msg="Container to stop \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 12 23:41:56.754114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e-shm.mount: Deactivated successfully. May 12 23:41:56.755806 systemd[1]: cri-containerd-f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc.scope: Deactivated successfully. May 12 23:41:56.765504 systemd[1]: cri-containerd-7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e.scope: Deactivated successfully. 
May 12 23:41:56.784129 containerd[1566]: time="2025-05-12T23:41:56.784025776Z" level=info msg="shim disconnected" id=7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e namespace=k8s.io May 12 23:41:56.784129 containerd[1566]: time="2025-05-12T23:41:56.784058642Z" level=warning msg="cleaning up after shim disconnected" id=7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e namespace=k8s.io May 12 23:41:56.784129 containerd[1566]: time="2025-05-12T23:41:56.784063832Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:41:56.784129 containerd[1566]: time="2025-05-12T23:41:56.784067504Z" level=info msg="shim disconnected" id=f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc namespace=k8s.io May 12 23:41:56.784129 containerd[1566]: time="2025-05-12T23:41:56.784087986Z" level=warning msg="cleaning up after shim disconnected" id=f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc namespace=k8s.io May 12 23:41:56.784129 containerd[1566]: time="2025-05-12T23:41:56.784092821Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:41:56.793167 containerd[1566]: time="2025-05-12T23:41:56.793050550Z" level=info msg="TearDown network for sandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" successfully" May 12 23:41:56.793167 containerd[1566]: time="2025-05-12T23:41:56.793068837Z" level=info msg="StopPodSandbox for \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" returns successfully" May 12 23:41:56.794053 containerd[1566]: time="2025-05-12T23:41:56.794003084Z" level=warning msg="cleanup warnings time=\"2025-05-12T23:41:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 12 23:41:56.795419 containerd[1566]: time="2025-05-12T23:41:56.794780910Z" level=info msg="TearDown network for sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" successfully" May 12 23:41:56.795419 containerd[1566]: time="2025-05-12T23:41:56.794798049Z" level=info msg="StopPodSandbox for \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" returns successfully" May 12 23:41:56.839594 kubelet[2880]: I0512 23:41:56.839563 2880 scope.go:117] "RemoveContainer" containerID="01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9" May 12 23:41:56.843111 containerd[1566]: time="2025-05-12T23:41:56.843072872Z" level=info msg="RemoveContainer for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\"" May 12 23:41:56.844924 containerd[1566]: time="2025-05-12T23:41:56.844759701Z" level=info msg="RemoveContainer for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" returns successfully" May 12 23:41:56.845450 kubelet[2880]: I0512 23:41:56.845091 2880 scope.go:117] "RemoveContainer" containerID="01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9" May 12 23:41:56.845489 containerd[1566]: time="2025-05-12T23:41:56.845236601Z" level=error msg="ContainerStatus for \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\": not found" May 12 23:41:56.850686 kubelet[2880]: E0512 23:41:56.850598 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to 
find container \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\": not found" containerID="01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9" May 12 23:41:56.850837 kubelet[2880]: I0512 23:41:56.850638 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9"} err="failed to get container status \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"01e842d1098e294ade9c8456924d5fa321f900e98cfbcbd3675de808a8b810b9\": not found" May 12 23:41:56.853287 kubelet[2880]: I0512 23:41:56.850762 2880 scope.go:117] "RemoveContainer" containerID="d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901" May 12 23:41:56.854266 containerd[1566]: time="2025-05-12T23:41:56.853985889Z" level=info msg="RemoveContainer for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\"" May 12 23:41:56.855398 containerd[1566]: time="2025-05-12T23:41:56.855385433Z" level=info msg="RemoveContainer for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" returns successfully" May 12 23:41:56.855529 kubelet[2880]: I0512 23:41:56.855521 2880 scope.go:117] "RemoveContainer" containerID="feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37" May 12 23:41:56.856924 containerd[1566]: time="2025-05-12T23:41:56.856911012Z" level=info msg="RemoveContainer for \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\"" May 12 23:41:56.858399 containerd[1566]: time="2025-05-12T23:41:56.858386816Z" level=info msg="RemoveContainer for \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\" returns successfully" May 12 23:41:56.858569 kubelet[2880]: I0512 23:41:56.858521 2880 scope.go:117] "RemoveContainer" containerID="7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30" May 12 23:41:56.859098 containerd[1566]: time="2025-05-12T23:41:56.859030093Z" level=info msg="RemoveContainer for \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\"" May 12 23:41:56.860283 containerd[1566]: time="2025-05-12T23:41:56.860218505Z" level=info msg="RemoveContainer for \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\" returns successfully" May 12 23:41:56.860399 kubelet[2880]: I0512 23:41:56.860350 2880 scope.go:117] "RemoveContainer" containerID="c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad" May 12 23:41:56.860890 containerd[1566]: time="2025-05-12T23:41:56.860879848Z" level=info msg="RemoveContainer for \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\"" May 12 23:41:56.862021 containerd[1566]: time="2025-05-12T23:41:56.862010021Z" level=info msg="RemoveContainer for \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\" returns successfully" May 12 23:41:56.862162 kubelet[2880]: I0512 23:41:56.862135 2880 scope.go:117] "RemoveContainer" containerID="7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46" May 12 23:41:56.862818 containerd[1566]: time="2025-05-12T23:41:56.862805118Z" level=info msg="RemoveContainer for \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\"" May 12 23:41:56.863862 containerd[1566]: time="2025-05-12T23:41:56.863847111Z" level=info msg="RemoveContainer for \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\" returns successfully" May 12 23:41:56.864070 kubelet[2880]: I0512 
23:41:56.863948 2880 scope.go:117] "RemoveContainer" containerID="d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901" May 12 23:41:56.864111 containerd[1566]: time="2025-05-12T23:41:56.864036312Z" level=error msg="ContainerStatus for \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\": not found" May 12 23:41:56.864133 kubelet[2880]: E0512 23:41:56.864116 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\": not found" containerID="d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901" May 12 23:41:56.864153 kubelet[2880]: I0512 23:41:56.864130 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901"} err="failed to get container status \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\": rpc error: code = NotFound desc = an error occurred when try to find container \"d837827ac7cdbfe54efa5f9826dfd6a19efd6d4234caa231494eec2372f43901\": not found" May 12 23:41:56.864153 kubelet[2880]: I0512 23:41:56.864142 2880 scope.go:117] "RemoveContainer" containerID="feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37" May 12 23:41:56.864304 containerd[1566]: time="2025-05-12T23:41:56.864255802Z" level=error msg="ContainerStatus for \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\": not found" May 12 23:41:56.864351 kubelet[2880]: E0512 23:41:56.864340 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\": not found" containerID="feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37" May 12 23:41:56.864374 kubelet[2880]: I0512 23:41:56.864351 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37"} err="failed to get container status \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\": rpc error: code = NotFound desc = an error occurred when try to find container \"feb8088d36a2326ce6b0468c858fe40b57fa3a9b4dc0d0756f9c26aadceecd37\": not found" May 12 23:41:56.864374 kubelet[2880]: I0512 23:41:56.864362 2880 scope.go:117] "RemoveContainer" containerID="7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30" May 12 23:41:56.864925 kubelet[2880]: E0512 23:41:56.864536 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\": not found" containerID="7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30" May 12 23:41:56.864925 kubelet[2880]: I0512 23:41:56.864548 2880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30"} err="failed to get container status \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\": rpc error: code = NotFound desc = an error occurred when try to find container \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\": not found" May 12 23:41:56.864925 kubelet[2880]: I0512 23:41:56.864556 2880 scope.go:117] "RemoveContainer" containerID="c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad" May 12 23:41:56.864925 kubelet[2880]: E0512 23:41:56.864716 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\": not found" containerID="c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad" May 12 23:41:56.864925 kubelet[2880]: I0512 23:41:56.864727 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad"} err="failed to get container status \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\": not found" May 12 23:41:56.864925 kubelet[2880]: I0512 23:41:56.864751 2880 scope.go:117] "RemoveContainer" containerID="7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46" May 12 23:41:56.865183 containerd[1566]: time="2025-05-12T23:41:56.864452902Z" level=error msg="ContainerStatus for \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7597067a50b2143193a261804e71900d0999fd6cdf5afd3e64ff4d7fd8e82f30\": not found" May 12 23:41:56.865183 containerd[1566]: time="2025-05-12T23:41:56.864647163Z" level=error msg="ContainerStatus for \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5065a8ede0feb357b1e754f136b2a5857e8c4a26238345ded7806c281fe4bad\": not found" May 12 23:41:56.865183 containerd[1566]: time="2025-05-12T23:41:56.864834674Z" level=error msg="ContainerStatus for \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\": not found" May 12 23:41:56.865234 kubelet[2880]: E0512 23:41:56.864892 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\": not found" containerID="7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46" May 12 23:41:56.865234 kubelet[2880]: I0512 23:41:56.864901 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46"} err="failed to get container status \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d54ae55e9b255082adb86369281a15635627e616cdbbb40ab7ca8f77ebb5e46\": not 
found" May 12 23:41:56.958820 kubelet[2880]: I0512 23:41:56.958788 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0164df58-cfc5-404c-a73b-8d45356dcf78-cilium-config-path\") pod \"0164df58-cfc5-404c-a73b-8d45356dcf78\" (UID: \"0164df58-cfc5-404c-a73b-8d45356dcf78\") " May 12 23:41:56.958820 kubelet[2880]: I0512 23:41:56.958818 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-xtables-lock\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959088 kubelet[2880]: I0512 23:41:56.958835 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a7a7643-15de-49f1-b799-3f44a8701ae4-clustermesh-secrets\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959088 kubelet[2880]: I0512 23:41:56.958848 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-etc-cni-netd\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959088 kubelet[2880]: I0512 23:41:56.958859 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-run\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959088 kubelet[2880]: I0512 23:41:56.958872 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-lib-modules\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959088 kubelet[2880]: I0512 23:41:56.958883 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-net\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959088 kubelet[2880]: I0512 23:41:56.958893 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-kernel\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959271 kubelet[2880]: I0512 23:41:56.958906 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6bxf\" (UniqueName: \"kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-kube-api-access-l6bxf\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959271 kubelet[2880]: I0512 23:41:56.958917 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-bpf-maps\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959271 
kubelet[2880]: I0512 23:41:56.958928 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cni-path\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959271 kubelet[2880]: I0512 23:41:56.958941 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w94w\" (UniqueName: \"kubernetes.io/projected/0164df58-cfc5-404c-a73b-8d45356dcf78-kube-api-access-2w94w\") pod \"0164df58-cfc5-404c-a73b-8d45356dcf78\" (UID: \"0164df58-cfc5-404c-a73b-8d45356dcf78\") " May 12 23:41:56.959271 kubelet[2880]: I0512 23:41:56.958953 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-hubble-tls\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959271 kubelet[2880]: I0512 23:41:56.958966 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-hostproc\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959400 kubelet[2880]: I0512 23:41:56.958982 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-cgroup\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.959400 kubelet[2880]: I0512 23:41:56.958994 2880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-config-path\") pod \"5a7a7643-15de-49f1-b799-3f44a8701ae4\" (UID: \"5a7a7643-15de-49f1-b799-3f44a8701ae4\") " May 12 23:41:56.962347 kubelet[2880]: I0512 23:41:56.961068 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.962347 kubelet[2880]: I0512 23:41:56.962259 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.962593 kubelet[2880]: I0512 23:41:56.962482 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.962593 kubelet[2880]: I0512 23:41:56.962503 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.962593 kubelet[2880]: I0512 23:41:56.962515 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.962593 kubelet[2880]: I0512 23:41:56.962525 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.966990 kubelet[2880]: I0512 23:41:56.966847 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 12 23:41:56.970089 kubelet[2880]: I0512 23:41:56.969316 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.970089 kubelet[2880]: I0512 23:41:56.969393 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cni-path" (OuterVolumeSpecName: "cni-path") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.971715 kubelet[2880]: I0512 23:41:56.971696 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-hostproc" (OuterVolumeSpecName: "hostproc") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.972045 kubelet[2880]: I0512 23:41:56.971724 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 12 23:41:56.973261 kubelet[2880]: I0512 23:41:56.973246 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7a7643-15de-49f1-b799-3f44a8701ae4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 12 23:41:56.973488 kubelet[2880]: I0512 23:41:56.973426 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-kube-api-access-l6bxf" (OuterVolumeSpecName: "kube-api-access-l6bxf") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "kube-api-access-l6bxf". PluginName "kubernetes.io/projected", VolumeGidValue "" May 12 23:41:56.973570 kubelet[2880]: I0512 23:41:56.973471 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5a7a7643-15de-49f1-b799-3f44a8701ae4" (UID: "5a7a7643-15de-49f1-b799-3f44a8701ae4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 12 23:41:56.974093 kubelet[2880]: I0512 23:41:56.974026 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0164df58-cfc5-404c-a73b-8d45356dcf78-kube-api-access-2w94w" (OuterVolumeSpecName: "kube-api-access-2w94w") pod "0164df58-cfc5-404c-a73b-8d45356dcf78" (UID: "0164df58-cfc5-404c-a73b-8d45356dcf78"). InnerVolumeSpecName "kube-api-access-2w94w". PluginName "kubernetes.io/projected", VolumeGidValue "" May 12 23:41:56.975479 kubelet[2880]: I0512 23:41:56.975445 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0164df58-cfc5-404c-a73b-8d45356dcf78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0164df58-cfc5-404c-a73b-8d45356dcf78" (UID: "0164df58-cfc5-404c-a73b-8d45356dcf78"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061835 2880 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2w94w\" (UniqueName: \"kubernetes.io/projected/0164df58-cfc5-404c-a73b-8d45356dcf78-kube-api-access-2w94w\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061862 2880 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061871 2880 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-hostproc\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061877 2880 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061886 2880 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061893 2880 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0164df58-cfc5-404c-a73b-8d45356dcf78-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061900 2880 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.061937 kubelet[2880]: I0512 23:41:57.061905 2880 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a7a7643-15de-49f1-b799-3f44a8701ae4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061912 2880 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061918 2880 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cilium-run\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061923 2880 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-lib-modules\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061929 2880 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061937 2880 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l6bxf\" (UniqueName: \"kubernetes.io/projected/5a7a7643-15de-49f1-b799-3f44a8701ae4-kube-api-access-l6bxf\") 
on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061944 2880 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061950 2880 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-cni-path\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.062629 kubelet[2880]: I0512 23:41:57.061957 2880 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a7a7643-15de-49f1-b799-3f44a8701ae4-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 12 23:41:57.143394 systemd[1]: Removed slice kubepods-besteffort-pod0164df58_cfc5_404c_a73b_8d45356dcf78.slice - libcontainer container kubepods-besteffort-pod0164df58_cfc5_404c_a73b_8d45356dcf78.slice. May 12 23:41:57.143672 systemd[1]: kubepods-besteffort-pod0164df58_cfc5_404c_a73b_8d45356dcf78.slice: Consumed 268ms CPU time, 25.5M memory peak, 6.5M read from disk, 4K written to disk. May 12 23:41:57.153693 systemd[1]: Removed slice kubepods-burstable-pod5a7a7643_15de_49f1_b799_3f44a8701ae4.slice - libcontainer container kubepods-burstable-pod5a7a7643_15de_49f1_b799_3f44a8701ae4.slice. May 12 23:41:57.153795 systemd[1]: kubepods-burstable-pod5a7a7643_15de_49f1_b799_3f44a8701ae4.slice: Consumed 4.597s CPU time, 190.3M memory peak, 67.5M read from disk, 13.3M written to disk. May 12 23:41:57.439701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e-rootfs.mount: Deactivated successfully. May 12 23:41:57.440649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc-rootfs.mount: Deactivated successfully. May 12 23:41:57.440768 systemd[1]: var-lib-kubelet-pods-5a7a7643\x2d15de\x2d49f1\x2db799\x2d3f44a8701ae4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl6bxf.mount: Deactivated successfully. May 12 23:41:57.440866 systemd[1]: var-lib-kubelet-pods-5a7a7643\x2d15de\x2d49f1\x2db799\x2d3f44a8701ae4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 12 23:41:57.440920 systemd[1]: var-lib-kubelet-pods-5a7a7643\x2d15de\x2d49f1\x2db799\x2d3f44a8701ae4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 12 23:41:57.440975 systemd[1]: var-lib-kubelet-pods-0164df58\x2dcfc5\x2d404c\x2da73b\x2d8d45356dcf78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2w94w.mount: Deactivated successfully. 
May 12 23:41:57.449015 kubelet[2880]: I0512 23:41:57.448967 2880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0164df58-cfc5-404c-a73b-8d45356dcf78" path="/var/lib/kubelet/pods/0164df58-cfc5-404c-a73b-8d45356dcf78/volumes" May 12 23:41:57.449664 kubelet[2880]: I0512 23:41:57.449647 2880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" path="/var/lib/kubelet/pods/5a7a7643-15de-49f1-b799-3f44a8701ae4/volumes" May 12 23:41:58.335767 sshd[4479]: Connection closed by 139.178.68.195 port 53660 May 12 23:41:58.336616 sshd-session[4476]: pam_unix(sshd:session): session closed for user core May 12 23:41:58.342723 systemd[1]: sshd@22-139.178.70.108:22-139.178.68.195:53660.service: Deactivated successfully. May 12 23:41:58.344135 systemd[1]: session-25.scope: Deactivated successfully. May 12 23:41:58.344836 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. May 12 23:41:58.350929 systemd[1]: Started sshd@23-139.178.70.108:22-139.178.68.195:53670.service - OpenSSH per-connection server daemon (139.178.68.195:53670). May 12 23:41:58.352199 systemd-logind[1545]: Removed session 25. May 12 23:41:58.387223 sshd[4638]: Accepted publickey for core from 139.178.68.195 port 53670 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0 May 12 23:41:58.388225 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:58.392415 systemd-logind[1545]: New session 26 of user core. May 12 23:41:58.398853 systemd[1]: Started session-26.scope - Session 26 of User core. May 12 23:41:58.482229 kubelet[2880]: E0512 23:41:58.482194 2880 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 12 23:41:58.848073 sshd[4641]: Connection closed by 139.178.68.195 port 53670 May 12 23:41:58.848281 sshd-session[4638]: pam_unix(sshd:session): session closed for user core May 12 23:41:58.859092 systemd[1]: sshd@23-139.178.70.108:22-139.178.68.195:53670.service: Deactivated successfully. May 12 23:41:58.861339 systemd[1]: session-26.scope: Deactivated successfully. May 12 23:41:58.862827 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit. 
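
With the cilium pod gone and its CNI config removed, kubelet reports "Container runtime network not ready ... cni plugin not initialized" above; until a network plugin is configured again, this surfaces as a not-ready condition on the Node object. A hedged client-go sketch for reading those conditions; the kubeconfig path is an assumption, and in-cluster code would use rest.InClusterConfig instead:

```go
// Sketch: read node conditions with client-go to observe the effect of the
// "Container runtime network not ready" state reported by kubelet above.
// The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// The Ready condition carries the "cni plugin not initialized"
			// message while no network plugin is configured.
			fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Message)
		}
	}
}
```
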
May 12 23:41:58.867754 kubelet[2880]: I0512 23:41:58.866445 2880 topology_manager.go:215] "Topology Admit Handler" podUID="38488f80-f085-4fb5-847c-8fff175ca0e5" podNamespace="kube-system" podName="cilium-dhxqp"
May 12 23:41:58.867875 kubelet[2880]: E0512 23:41:58.867865 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" containerName="mount-cgroup"
May 12 23:41:58.869618 kubelet[2880]: E0512 23:41:58.867986 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" containerName="apply-sysctl-overwrites"
May 12 23:41:58.869618 kubelet[2880]: E0512 23:41:58.867993 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" containerName="clean-cilium-state"
May 12 23:41:58.869618 kubelet[2880]: E0512 23:41:58.867996 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" containerName="cilium-agent"
May 12 23:41:58.869618 kubelet[2880]: E0512 23:41:58.868001 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0164df58-cfc5-404c-a73b-8d45356dcf78" containerName="cilium-operator"
May 12 23:41:58.869618 kubelet[2880]: E0512 23:41:58.868005 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" containerName="mount-bpf-fs"
May 12 23:41:58.871211 systemd[1]: Started sshd@24-139.178.70.108:22-139.178.68.195:53676.service - OpenSSH per-connection server daemon (139.178.68.195:53676).
May 12 23:41:58.875683 kubelet[2880]: I0512 23:41:58.875493 2880 memory_manager.go:354] "RemoveStaleState removing state" podUID="0164df58-cfc5-404c-a73b-8d45356dcf78" containerName="cilium-operator"
May 12 23:41:58.875683 kubelet[2880]: I0512 23:41:58.875515 2880 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7a7643-15de-49f1-b799-3f44a8701ae4" containerName="cilium-agent"
May 12 23:41:58.876189 systemd-logind[1545]: Removed session 26.
May 12 23:41:58.897041 systemd[1]: Created slice kubepods-burstable-pod38488f80_f085_4fb5_847c_8fff175ca0e5.slice - libcontainer container kubepods-burstable-pod38488f80_f085_4fb5_847c_8fff175ca0e5.slice.
May 12 23:41:58.924231 sshd[4650]: Accepted publickey for core from 139.178.68.195 port 53676 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0
May 12 23:41:58.925086 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:41:58.927657 systemd-logind[1545]: New session 27 of user core.
May 12 23:41:58.938873 systemd[1]: Started session-27.scope - Session 27 of User core.
May 12 23:41:58.973449 kubelet[2880]: I0512 23:41:58.973416 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-bpf-maps\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973449 kubelet[2880]: I0512 23:41:58.973443 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-hostproc\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973536 kubelet[2880]: I0512 23:41:58.973457 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-cilium-cgroup\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973536 kubelet[2880]: I0512 23:41:58.973466 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-cilium-run\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973536 kubelet[2880]: I0512 23:41:58.973476 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38488f80-f085-4fb5-847c-8fff175ca0e5-hubble-tls\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973536 kubelet[2880]: I0512 23:41:58.973485 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-etc-cni-netd\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973536 kubelet[2880]: I0512 23:41:58.973496 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38488f80-f085-4fb5-847c-8fff175ca0e5-cilium-config-path\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973536 kubelet[2880]: I0512 23:41:58.973508 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ds7g\" (UniqueName: \"kubernetes.io/projected/38488f80-f085-4fb5-847c-8fff175ca0e5-kube-api-access-5ds7g\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973653 kubelet[2880]: I0512 23:41:58.973519 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38488f80-f085-4fb5-847c-8fff175ca0e5-clustermesh-secrets\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973653 kubelet[2880]: I0512 23:41:58.973528 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-lib-modules\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973653 kubelet[2880]: I0512 23:41:58.973537 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-cni-path\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973653 kubelet[2880]: I0512 23:41:58.973545 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-xtables-lock\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973653 kubelet[2880]: I0512 23:41:58.973553 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38488f80-f085-4fb5-847c-8fff175ca0e5-cilium-ipsec-secrets\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973653 kubelet[2880]: I0512 23:41:58.973562 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-host-proc-sys-net\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.973767 kubelet[2880]: I0512 23:41:58.973571 2880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38488f80-f085-4fb5-847c-8fff175ca0e5-host-proc-sys-kernel\") pod \"cilium-dhxqp\" (UID: \"38488f80-f085-4fb5-847c-8fff175ca0e5\") " pod="kube-system/cilium-dhxqp"
May 12 23:41:58.986632 sshd[4654]: Connection closed by 139.178.68.195 port 53676
May 12 23:41:58.986027 sshd-session[4650]: pam_unix(sshd:session): session closed for user core
May 12 23:41:58.999046 systemd[1]: sshd@24-139.178.70.108:22-139.178.68.195:53676.service: Deactivated successfully.
May 12 23:41:59.000115 systemd[1]: session-27.scope: Deactivated successfully.
May 12 23:41:59.001082 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit.
May 12 23:41:59.007933 systemd[1]: Started sshd@25-139.178.70.108:22-139.178.68.195:53684.service - OpenSSH per-connection server daemon (139.178.68.195:53684).
May 12 23:41:59.008999 systemd-logind[1545]: Removed session 27.
May 12 23:41:59.039895 sshd[4661]: Accepted publickey for core from 139.178.68.195 port 53684 ssh2: RSA SHA256:sCFuDykEXum0h7cf6aaOmFY5lUkY7O+fj3cBiGeu3s0
May 12 23:41:59.040709 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:41:59.043628 systemd-logind[1545]: New session 28 of user core.
May 12 23:41:59.052022 systemd[1]: Started session-28.scope - Session 28 of User core.
May 12 23:41:59.199963 containerd[1566]: time="2025-05-12T23:41:59.199889965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhxqp,Uid:38488f80-f085-4fb5-847c-8fff175ca0e5,Namespace:kube-system,Attempt:0,}"
May 12 23:41:59.217084 containerd[1566]: time="2025-05-12T23:41:59.216475057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 12 23:41:59.217084 containerd[1566]: time="2025-05-12T23:41:59.216924633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 12 23:41:59.217084 containerd[1566]: time="2025-05-12T23:41:59.216946140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 23:41:59.217084 containerd[1566]: time="2025-05-12T23:41:59.217018136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 23:41:59.230902 systemd[1]: Started cri-containerd-164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9.scope - libcontainer container 164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9.
May 12 23:41:59.248038 containerd[1566]: time="2025-05-12T23:41:59.248011270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhxqp,Uid:38488f80-f085-4fb5-847c-8fff175ca0e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\""
May 12 23:41:59.250462 containerd[1566]: time="2025-05-12T23:41:59.250438036Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 12 23:41:59.256484 containerd[1566]: time="2025-05-12T23:41:59.256450706Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650\""
May 12 23:41:59.256903 containerd[1566]: time="2025-05-12T23:41:59.256868934Z" level=info msg="StartContainer for \"843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650\""
May 12 23:41:59.281898 systemd[1]: Started cri-containerd-843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650.scope - libcontainer container 843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650.
May 12 23:41:59.299560 containerd[1566]: time="2025-05-12T23:41:59.299537507Z" level=info msg="StartContainer for \"843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650\" returns successfully"
May 12 23:41:59.308663 systemd[1]: cri-containerd-843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650.scope: Deactivated successfully.
May 12 23:41:59.373212 containerd[1566]: time="2025-05-12T23:41:59.373122441Z" level=info msg="shim disconnected" id=843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650 namespace=k8s.io
May 12 23:41:59.373212 containerd[1566]: time="2025-05-12T23:41:59.373178803Z" level=warning msg="cleaning up after shim disconnected" id=843e7ab0960089cd634ac8b5226c0b1cc07e88353509677bac96bdd9cac9b650 namespace=k8s.io
May 12 23:41:59.373212 containerd[1566]: time="2025-05-12T23:41:59.373184165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 23:41:59.447075 kubelet[2880]: E0512 23:41:59.446916 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-szz8v" podUID="976e96bc-2abf-4899-a08a-0dd42f437878"
May 12 23:41:59.856219 containerd[1566]: time="2025-05-12T23:41:59.856182738Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 12 23:41:59.901364 containerd[1566]: time="2025-05-12T23:41:59.901293795Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1\""
May 12 23:41:59.901829 containerd[1566]: time="2025-05-12T23:41:59.901726283Z" level=info msg="StartContainer for \"e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1\""
May 12 23:41:59.921911 systemd[1]: Started cri-containerd-e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1.scope - libcontainer container e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1.
May 12 23:41:59.939353 containerd[1566]: time="2025-05-12T23:41:59.939327995Z" level=info msg="StartContainer for \"e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1\" returns successfully"
May 12 23:41:59.943470 systemd[1]: cri-containerd-e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1.scope: Deactivated successfully.
May 12 23:41:59.959539 containerd[1566]: time="2025-05-12T23:41:59.959493022Z" level=info msg="shim disconnected" id=e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1 namespace=k8s.io
May 12 23:41:59.959658 containerd[1566]: time="2025-05-12T23:41:59.959547323Z" level=warning msg="cleaning up after shim disconnected" id=e6cd7e0705f56259cefd31266662a7a796f50bdb57ab2bdf4abcfdfa300d16d1 namespace=k8s.io
May 12 23:41:59.959658 containerd[1566]: time="2025-05-12T23:41:59.959556381Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 23:42:00.858140 containerd[1566]: time="2025-05-12T23:42:00.858022891Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 12 23:42:00.895639 containerd[1566]: time="2025-05-12T23:42:00.895604904Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e\""
May 12 23:42:00.896971 containerd[1566]: time="2025-05-12T23:42:00.896827378Z" level=info msg="StartContainer for \"4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e\""
May 12 23:42:00.917840 systemd[1]: Started cri-containerd-4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e.scope - libcontainer container 4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e.
May 12 23:42:00.934769 containerd[1566]: time="2025-05-12T23:42:00.934591526Z" level=info msg="StartContainer for \"4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e\" returns successfully"
May 12 23:42:00.940820 systemd[1]: cri-containerd-4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e.scope: Deactivated successfully.
May 12 23:42:00.956789 containerd[1566]: time="2025-05-12T23:42:00.956727592Z" level=info msg="shim disconnected" id=4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e namespace=k8s.io
May 12 23:42:00.956988 containerd[1566]: time="2025-05-12T23:42:00.956865220Z" level=warning msg="cleaning up after shim disconnected" id=4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e namespace=k8s.io
May 12 23:42:00.956988 containerd[1566]: time="2025-05-12T23:42:00.956873947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 23:42:01.077443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4341cb019883590cdc65bd148f787c81ae322ca7848f9b7a0325717bc55a881e-rootfs.mount: Deactivated successfully.
May 12 23:42:01.446879 kubelet[2880]: E0512 23:42:01.446662 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-szz8v" podUID="976e96bc-2abf-4899-a08a-0dd42f437878"
May 12 23:42:01.861470 containerd[1566]: time="2025-05-12T23:42:01.861231868Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 12 23:42:01.869936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408358390.mount: Deactivated successfully.
May 12 23:42:01.870936 containerd[1566]: time="2025-05-12T23:42:01.870860935Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7\""
May 12 23:42:01.871686 containerd[1566]: time="2025-05-12T23:42:01.871653685Z" level=info msg="StartContainer for \"3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7\""
May 12 23:42:01.899939 systemd[1]: Started cri-containerd-3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7.scope - libcontainer container 3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7.
May 12 23:42:01.915836 systemd[1]: cri-containerd-3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7.scope: Deactivated successfully.
May 12 23:42:01.916231 containerd[1566]: time="2025-05-12T23:42:01.916113082Z" level=info msg="StartContainer for \"3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7\" returns successfully"
May 12 23:42:01.929181 containerd[1566]: time="2025-05-12T23:42:01.929103468Z" level=info msg="shim disconnected" id=3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7 namespace=k8s.io
May 12 23:42:01.929181 containerd[1566]: time="2025-05-12T23:42:01.929178563Z" level=warning msg="cleaning up after shim disconnected" id=3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7 namespace=k8s.io
May 12 23:42:01.929337 containerd[1566]: time="2025-05-12T23:42:01.929185105Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 23:42:01.935811 containerd[1566]: time="2025-05-12T23:42:01.935788874Z" level=warning msg="cleanup warnings time=\"2025-05-12T23:42:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 12 23:42:02.077693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6d376f1cc2e140353f1225f709df40e7a190294871c15cbb3b1f647c18d9e7-rootfs.mount: Deactivated successfully.
May 12 23:42:02.863649 containerd[1566]: time="2025-05-12T23:42:02.863350207Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 12 23:42:02.928220 containerd[1566]: time="2025-05-12T23:42:02.928149097Z" level=info msg="CreateContainer within sandbox \"164f6a067ecdeb43f5b12038c83349807f0869c07c4bb00b08be9751d15ae8d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb\""
May 12 23:42:02.928745 containerd[1566]: time="2025-05-12T23:42:02.928619865Z" level=info msg="StartContainer for \"f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb\""
May 12 23:42:02.950848 systemd[1]: Started cri-containerd-f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb.scope - libcontainer container f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb.
May 12 23:42:02.970356 containerd[1566]: time="2025-05-12T23:42:02.970300118Z" level=info msg="StartContainer for \"f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb\" returns successfully"
May 12 23:42:03.428516 containerd[1566]: time="2025-05-12T23:42:03.428251540Z" level=info msg="StopPodSandbox for \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\""
May 12 23:42:03.428516 containerd[1566]: time="2025-05-12T23:42:03.428320942Z" level=info msg="TearDown network for sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" successfully"
May 12 23:42:03.428516 containerd[1566]: time="2025-05-12T23:42:03.428361583Z" level=info msg="StopPodSandbox for \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" returns successfully"
May 12 23:42:03.428718 containerd[1566]: time="2025-05-12T23:42:03.428697675Z" level=info msg="RemovePodSandbox for \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\""
May 12 23:42:03.428795 containerd[1566]: time="2025-05-12T23:42:03.428724453Z" level=info msg="Forcibly stopping sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\""
May 12 23:42:03.428891 containerd[1566]: time="2025-05-12T23:42:03.428773290Z" level=info msg="TearDown network for sandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" successfully"
May 12 23:42:03.431745 containerd[1566]: time="2025-05-12T23:42:03.431711096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 12 23:42:03.431823 containerd[1566]: time="2025-05-12T23:42:03.431756140Z" level=info msg="RemovePodSandbox \"7c891dc2b58685baf8c95af29e5c9a52b289e1a5b0b1e5bb0db52044a263fa9e\" returns successfully"
May 12 23:42:03.432228 containerd[1566]: time="2025-05-12T23:42:03.432123839Z" level=info msg="StopPodSandbox for \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\""
May 12 23:42:03.432228 containerd[1566]: time="2025-05-12T23:42:03.432169815Z" level=info msg="TearDown network for sandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" successfully"
May 12 23:42:03.432228 containerd[1566]: time="2025-05-12T23:42:03.432177566Z" level=info msg="StopPodSandbox for \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" returns successfully"
May 12 23:42:03.432851 containerd[1566]: time="2025-05-12T23:42:03.432366671Z" level=info msg="RemovePodSandbox for \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\""
May 12 23:42:03.432851 containerd[1566]: time="2025-05-12T23:42:03.432381706Z" level=info msg="Forcibly stopping sandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\""
May 12 23:42:03.432851 containerd[1566]: time="2025-05-12T23:42:03.432417764Z" level=info msg="TearDown network for sandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" successfully"
May 12 23:42:03.433753 containerd[1566]: time="2025-05-12T23:42:03.433715541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 12 23:42:03.433794 containerd[1566]: time="2025-05-12T23:42:03.433755883Z" level=info msg="RemovePodSandbox \"f31af28e1d05f503e0914e6fb1c56d10a75e9de3f22e56a8040d2bc273667abc\" returns successfully"
May 12 23:42:03.446981 kubelet[2880]: E0512 23:42:03.446846 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-szz8v" podUID="976e96bc-2abf-4899-a08a-0dd42f437878"
May 12 23:42:03.688751 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 12 23:42:06.198131 systemd-networkd[1257]: lxc_health: Link UP
May 12 23:42:06.198296 systemd-networkd[1257]: lxc_health: Gained carrier
May 12 23:42:07.212452 kubelet[2880]: I0512 23:42:07.212409 2880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dhxqp" podStartSLOduration=9.212392939 podStartE2EDuration="9.212392939s" podCreationTimestamp="2025-05-12 23:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:42:03.876067804 +0000 UTC m=+120.563467038" watchObservedRunningTime="2025-05-12 23:42:07.212392939 +0000 UTC m=+123.899792169"
May 12 23:42:07.324816 systemd-networkd[1257]: lxc_health: Gained IPv6LL
May 12 23:42:07.524777 systemd[1]: run-containerd-runc-k8s.io-f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb-runc.fVfYMr.mount: Deactivated successfully.
May 12 23:42:09.633363 systemd[1]: run-containerd-runc-k8s.io-f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb-runc.Hj1Fqr.mount: Deactivated successfully.
May 12 23:42:11.720015 systemd[1]: run-containerd-runc-k8s.io-f91c700618fb660ce2937cc2a8f2189249109d6bd627405bcd8961535797cffb-runc.faWP1B.mount: Deactivated successfully.
May 12 23:42:11.754757 sshd[4664]: Connection closed by 139.178.68.195 port 53684
May 12 23:42:11.755369 sshd-session[4661]: pam_unix(sshd:session): session closed for user core
May 12 23:42:11.757004 systemd[1]: sshd@25-139.178.70.108:22-139.178.68.195:53684.service: Deactivated successfully.
May 12 23:42:11.758244 systemd[1]: session-28.scope: Deactivated successfully.
May 12 23:42:11.759150 systemd-logind[1545]: Session 28 logged out. Waiting for processes to exit.
May 12 23:42:11.759691 systemd-logind[1545]: Removed session 28.