May 17 00:42:32.690244 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:42:32.690263 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:42:32.690270 kernel: Disabled fast string operations
May 17 00:42:32.690274 kernel: BIOS-provided physical RAM map:
May 17 00:42:32.690279 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 17 00:42:32.690291 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 17 00:42:32.690300 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 17 00:42:32.690304 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 17 00:42:32.690308 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 17 00:42:32.690312 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 17 00:42:32.690317 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 17 00:42:32.690320 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 17 00:42:32.690325 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 17 00:42:32.690329 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 17 00:42:32.690335 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 17 00:42:32.690340 kernel: NX (Execute Disable) protection: active
May 17 00:42:32.690344 kernel: SMBIOS 2.7 present.
May 17 00:42:32.690349 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 17 00:42:32.690353 kernel: vmware: hypercall mode: 0x00
May 17 00:42:32.690358 kernel: Hypervisor detected: VMware
May 17 00:42:32.690363 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 17 00:42:32.690368 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 17 00:42:32.690372 kernel: vmware: using clock offset of 5957378399 ns
May 17 00:42:32.690376 kernel: tsc: Detected 3408.000 MHz processor
May 17 00:42:32.690381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:42:32.690386 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:42:32.690391 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 17 00:42:32.690396 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:42:32.690400 kernel: total RAM covered: 3072M
May 17 00:42:32.690406 kernel: Found optimal setting for mtrr clean up
May 17 00:42:32.690411 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 17 00:42:32.690415 kernel: Using GB pages for direct mapping
May 17 00:42:32.690420 kernel: ACPI: Early table checksum verification disabled
May 17 00:42:32.690425 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 17 00:42:32.690429 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 17 00:42:32.690434 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 17 00:42:32.690438 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 17 00:42:32.690443 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 17 00:42:32.690447 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 17 00:42:32.690452 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 17 00:42:32.690459 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 17 00:42:32.690464 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 17 00:42:32.690469 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 17 00:42:32.690474 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 17 00:42:32.690480 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 17 00:42:32.690485 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 17 00:42:32.690490 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 17 00:42:32.690495 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 17 00:42:32.690499 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 17 00:42:32.690504 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 17 00:42:32.690509 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 17 00:42:32.690514 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 17 00:42:32.690519 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 17 00:42:32.690525 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 17 00:42:32.690530 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 17 00:42:32.690535 kernel: system APIC only can use physical flat
May 17 00:42:32.690539 kernel: Setting APIC routing to physical flat.
May 17 00:42:32.690544 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:42:32.690549 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 17 00:42:32.690554 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 17 00:42:32.690559 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 17 00:42:32.690564 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 17 00:42:32.690569 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 17 00:42:32.690574 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 17 00:42:32.690579 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 17 00:42:32.690584 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 17 00:42:32.690588 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 17 00:42:32.690593 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 17 00:42:32.690598 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 17 00:42:32.690603 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 17 00:42:32.690608 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 17 00:42:32.690613 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 17 00:42:32.690618 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 17 00:42:32.690623 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 17 00:42:32.690628 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 17 00:42:32.690633 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 17 00:42:32.690638 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 17 00:42:32.690643 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 17 00:42:32.690647 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 17 00:42:32.690652 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 17 00:42:32.690657 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 17 00:42:32.690662 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 17 00:42:32.690669 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 17 00:42:32.690674 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 17 00:42:32.690678 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 17 00:42:32.690683 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 17 00:42:32.690688 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 17 00:42:32.690693 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 17 00:42:32.690697 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 17 00:42:32.690714 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 17 00:42:32.690719 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 17 00:42:32.690790 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 17 00:42:32.690800 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 17 00:42:32.690805 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 17 00:42:32.690817 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 17 00:42:32.690823 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 17 00:42:32.690828 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 17 00:42:32.690833 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 17 00:42:32.690838 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 17 00:42:32.690843 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 17 00:42:32.690847 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 17 00:42:32.690852 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 17 00:42:32.690859 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 17 00:42:32.690864 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 17 00:42:32.690869 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 17 00:42:32.690874 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 17 00:42:32.690879 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 17 00:42:32.690884 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 17 00:42:32.690889 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 17 00:42:32.690893 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 17 00:42:32.690898 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 17 00:42:32.690903 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 17 00:42:32.690909 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 17 00:42:32.690914 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 17 00:42:32.690918 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 17 00:42:32.690923 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 17 00:42:32.690928 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 17 00:42:32.690933 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 17 00:42:32.690942 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 17 00:42:32.690948 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 17 00:42:32.690954 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 17 00:42:32.690959 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 17 00:42:32.690964 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 17 00:42:32.690970 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 17 00:42:32.690975 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 17 00:42:32.690980 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 17 00:42:32.690986 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 17 00:42:32.690991 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 17 00:42:32.690996 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 17 00:42:32.691001 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 17 00:42:32.691007 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 17 00:42:32.691012 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 17 00:42:32.691018 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 17 00:42:32.691023 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 17 00:42:32.691028 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 17 00:42:32.691033 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 17 00:42:32.691038 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 17 00:42:32.691044 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 17 00:42:32.691049 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 17 00:42:32.691054 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 17 00:42:32.691060 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 17 00:42:32.691066 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 17 00:42:32.691071 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 17 00:42:32.691076 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 17 00:42:32.691081 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 17 00:42:32.691086 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 17 00:42:32.691091 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 17 00:42:32.691096 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 17 00:42:32.691102 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 17 00:42:32.691108 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 17 00:42:32.691113 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 17 00:42:32.691118 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 17 00:42:32.691123 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 17 00:42:32.691129 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 17 00:42:32.691134 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 17 00:42:32.691139 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 17 00:42:32.691144 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 17 00:42:32.691149 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 17 00:42:32.691154 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 17 00:42:32.691163 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 17 00:42:32.691168 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 17 00:42:32.691174 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 17 00:42:32.691179 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 17 00:42:32.691184 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 17 00:42:32.691189 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 17 00:42:32.691194 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 17 00:42:32.691199 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 17 00:42:32.691204 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 17 00:42:32.691209 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 17 00:42:32.691216 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 17 00:42:32.691221 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 17 00:42:32.691226 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 17 00:42:32.691231 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 17 00:42:32.691237 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 17 00:42:32.691242 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 17 00:42:32.691247 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 17 00:42:32.691252 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 17 00:42:32.691257 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 17 00:42:32.691262 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 17 00:42:32.691268 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 17 00:42:32.691273 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 17 00:42:32.691279 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 17 00:42:32.691284 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 17 00:42:32.691289 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 17 00:42:32.691294 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 17 00:42:32.691299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:42:32.691305 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 17 00:42:32.691310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 17 00:42:32.691316 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 17 00:42:32.691322 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 17 00:42:32.691328 kernel: Zone ranges:
May 17 00:42:32.691333 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:42:32.691338 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 17 00:42:32.691344 kernel: Normal empty
May 17 00:42:32.691349 kernel: Movable zone start for each node
May 17 00:42:32.691354 kernel: Early memory node ranges
May 17 00:42:32.691359 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 17 00:42:32.691364 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 17 00:42:32.691371 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 17 00:42:32.691376 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 17 00:42:32.691381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:42:32.691387 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 17 00:42:32.691392 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 17 00:42:32.691397 kernel: ACPI: PM-Timer IO Port: 0x1008
May 17 00:42:32.691403 kernel: system APIC only can use physical flat
May 17 00:42:32.691408 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 17 00:42:32.691413 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 17 00:42:32.691418 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 17 00:42:32.691424 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 17 00:42:32.691430 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 17 00:42:32.691435 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 17 00:42:32.691440 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 17 00:42:32.691445 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 17 00:42:32.691450 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 17 00:42:32.691456 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 17 00:42:32.691461 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 17 00:42:32.691466 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 17 00:42:32.691472 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 17 00:42:32.691477 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 17 00:42:32.691482 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 17 00:42:32.691487 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 17 00:42:32.691493 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 17 00:42:32.691498 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 17 00:42:32.691503 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 17 00:42:32.691508 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 17 00:42:32.691513 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 17 00:42:32.691520 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 17 00:42:32.691525 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 17 00:42:32.691530 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 17 00:42:32.691535 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 17 00:42:32.691540 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 17 00:42:32.691545 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 17 00:42:32.691551 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 17 00:42:32.691556 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 17 00:42:32.691561 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 17 00:42:32.691566 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 17 00:42:32.691572 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 17 00:42:32.691578 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 17 00:42:32.691583 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 17 00:42:32.691588 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 17 00:42:32.691593 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 17 00:42:32.691599 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 17 00:42:32.691604 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 17 00:42:32.691609 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 17 00:42:32.691614 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 17 00:42:32.691621 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 17 00:42:32.691626 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 17 00:42:32.691632 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 17 00:42:32.691637 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 17 00:42:32.691642 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 17 00:42:32.691648 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 17 00:42:32.691653 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 17 00:42:32.691658 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 17 00:42:32.691663 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 17 00:42:32.691668 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 17 00:42:32.691674 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 17 00:42:32.691680 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 17 00:42:32.691685 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 17 00:42:32.691690 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 17 00:42:32.691696 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 17 00:42:32.691711 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 17 00:42:32.691717 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 17 00:42:32.691722 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 17 00:42:32.691727 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 17 00:42:32.691734 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 17 00:42:32.691739 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 17 00:42:32.691744 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 17 00:42:32.691750 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 17 00:42:32.691755 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 17 00:42:32.691760 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 17 00:42:32.691765 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 17 00:42:32.691770 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 17 00:42:32.691776 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 17 00:42:32.691781 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 17 00:42:32.691787 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 17 00:42:32.691792 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 17 00:42:32.691798 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 17 00:42:32.691803 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 17 00:42:32.691808 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 17 00:42:32.691813 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 17 00:42:32.691819 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 17 00:42:32.691824 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 17 00:42:32.691829 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 17 00:42:32.691835 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 17 00:42:32.691840 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 17 00:42:32.691846 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 17 00:42:32.691851 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 17 00:42:32.691856 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 17 00:42:32.691861 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 17 00:42:32.691867 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 17 00:42:32.691872 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 17 00:42:32.691877 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 17 00:42:32.691882 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 17 00:42:32.691888 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 17 00:42:32.691893 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 17 00:42:32.691899 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 17 00:42:32.691904 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 17 00:42:32.691909 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 17 00:42:32.691914 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 17 00:42:32.691919 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 17 00:42:32.691924 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 17 00:42:32.691930 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 17 00:42:32.691936 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 17 00:42:32.691941 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 17 00:42:32.691946 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 17 00:42:32.691952 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 17 00:42:32.691957 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 17 00:42:32.691962 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 17 00:42:32.691967 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 17 00:42:32.691973 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 17 00:42:32.691978 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 17 00:42:32.691983 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 17 00:42:32.691990 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 17 00:42:32.691995 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 17 00:42:32.692000 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 17 00:42:32.692006 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 17 00:42:32.692011 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 17 00:42:32.692016 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 17 00:42:32.692021 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 17 00:42:32.692026 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 17 00:42:32.692032 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 17 00:42:32.692038 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 17 00:42:32.692044 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 17 00:42:32.692052 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 17 00:42:32.692057 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 17 00:42:32.692062 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 17 00:42:32.692068 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 17 00:42:32.692073 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 17 00:42:32.692078 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 17 00:42:32.692083 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 17 00:42:32.692088 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 17 00:42:32.692095 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 17 00:42:32.692100 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 17 00:42:32.692105 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 17 00:42:32.692111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 17 00:42:32.692116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:42:32.692121 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 17 00:42:32.692126 kernel: TSC deadline timer available
May 17 00:42:32.692132 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 17 00:42:32.692137 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 17 00:42:32.692144 kernel: Booting paravirtualized kernel on VMware hypervisor
May 17 00:42:32.692150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:42:32.692155 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
May 17 00:42:32.692161 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
May 17 00:42:32.692167 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
May 17 00:42:32.692172 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 17 00:42:32.692177 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 17 00:42:32.692183 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 17 00:42:32.692189 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 17 00:42:32.692194 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 17 00:42:32.692199 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 17 00:42:32.692205 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 17 00:42:32.692218 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 17 00:42:32.692224 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 17 00:42:32.692230 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 17 00:42:32.692236 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 17 00:42:32.692241 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 17 00:42:32.692248 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 17 00:42:32.692253 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 17 00:42:32.692258 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 17 00:42:32.692264 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 17 00:42:32.692269 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 17 00:42:32.692275 kernel: Policy zone: DMA32
May 17 00:42:32.692281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:42:32.692288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:42:32.692295 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 17 00:42:32.692302 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 17 00:42:32.692307 kernel: printk: log_buf_len min size: 262144 bytes
May 17 00:42:32.692314 kernel: printk: log_buf_len: 1048576 bytes
May 17 00:42:32.692320 kernel: printk: early log buf free: 239728(91%)
May 17 00:42:32.692325 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:42:32.692331 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:42:32.692337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:42:32.692342 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 155976K reserved, 0K cma-reserved)
May 17 00:42:32.692349 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 17 00:42:32.692355 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:42:32.692362 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:42:32.692368 kernel: rcu: Hierarchical RCU implementation.
May 17 00:42:32.692375 kernel: rcu: RCU event tracing is enabled.
May 17 00:42:32.692380 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 17 00:42:32.692388 kernel: Rude variant of Tasks RCU enabled.
May 17 00:42:32.692393 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:42:32.692399 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:42:32.692405 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 17 00:42:32.692410 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 17 00:42:32.692416 kernel: random: crng init done
May 17 00:42:32.692421 kernel: Console: colour VGA+ 80x25
May 17 00:42:32.692427 kernel: printk: console [tty0] enabled
May 17 00:42:32.692432 kernel: printk: console [ttyS0] enabled
May 17 00:42:32.692439 kernel: ACPI: Core revision 20210730
May 17 00:42:32.692445 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 17 00:42:32.692451 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:42:32.692456 kernel: x2apic enabled
May 17 00:42:32.692462 kernel: Switched APIC routing to physical x2apic.
May 17 00:42:32.692468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:42:32.692474 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 17 00:42:32.692479 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 17 00:42:32.692485 kernel: Disabled fast string operations
May 17 00:42:32.692492 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:42:32.692497 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:42:32.692503 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:42:32.692509 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
May 17 00:42:32.692515 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 00:42:32.692521 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 00:42:32.692526 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 17 00:42:32.692532 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 17 00:42:32.692538 kernel: RETBleed: Mitigation: Enhanced IBRS May 17 00:42:32.692544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:42:32.692550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 00:42:32.692556 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:42:32.692562 kernel: SRBDS: Unknown: Dependent on hypervisor status May 17 00:42:32.692567 kernel: GDS: Unknown: Dependent on hypervisor status May 17 00:42:32.692573 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:42:32.692579 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:42:32.692584 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:42:32.692591 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:42:32.692597 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 17 00:42:32.692602 kernel: Freeing SMP alternatives memory: 32K May 17 00:42:32.692608 kernel: pid_max: default: 131072 minimum: 1024 May 17 00:42:32.692614 kernel: LSM: Security Framework initializing May 17 00:42:32.692619 kernel: SELinux: Initializing. 
May 17 00:42:32.692625 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:42:32.692632 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:42:32.692639 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 17 00:42:32.692646 kernel: Performance Events: Skylake events, core PMU driver. May 17 00:42:32.692652 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 17 00:42:32.692658 kernel: core: CPUID marked event: 'instructions' unavailable May 17 00:42:32.692664 kernel: core: CPUID marked event: 'bus cycles' unavailable May 17 00:42:32.692671 kernel: core: CPUID marked event: 'cache references' unavailable May 17 00:42:32.692676 kernel: core: CPUID marked event: 'cache misses' unavailable May 17 00:42:32.692682 kernel: core: CPUID marked event: 'branch instructions' unavailable May 17 00:42:32.692687 kernel: core: CPUID marked event: 'branch misses' unavailable May 17 00:42:32.692693 kernel: ... version: 1 May 17 00:42:32.692705 kernel: ... bit width: 48 May 17 00:42:32.692711 kernel: ... generic registers: 4 May 17 00:42:32.692717 kernel: ... value mask: 0000ffffffffffff May 17 00:42:32.692722 kernel: ... max period: 000000007fffffff May 17 00:42:32.692728 kernel: ... fixed-purpose events: 0 May 17 00:42:32.692734 kernel: ... event mask: 000000000000000f May 17 00:42:32.692739 kernel: signal: max sigframe size: 1776 May 17 00:42:32.692745 kernel: rcu: Hierarchical SRCU implementation. May 17 00:42:32.692750 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:42:32.692758 kernel: smp: Bringing up secondary CPUs ... May 17 00:42:32.692764 kernel: x86: Booting SMP configuration: May 17 00:42:32.692771 kernel: .... 
node #0, CPUs: #1 May 17 00:42:32.692777 kernel: Disabled fast string operations May 17 00:42:32.692782 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 17 00:42:32.692788 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 17 00:42:32.692793 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:42:32.692799 kernel: smpboot: Max logical packages: 128 May 17 00:42:32.692807 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 17 00:42:32.692813 kernel: devtmpfs: initialized May 17 00:42:32.692821 kernel: x86/mm: Memory block size: 128MB May 17 00:42:32.692826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 17 00:42:32.692832 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:42:32.692838 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 17 00:42:32.692844 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:42:32.692849 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:42:32.692855 kernel: audit: initializing netlink subsys (disabled) May 17 00:42:32.692861 kernel: audit: type=2000 audit(1747442551.058:1): state=initialized audit_enabled=0 res=1 May 17 00:42:32.692867 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:42:32.692873 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:42:32.692879 kernel: cpuidle: using governor menu May 17 00:42:32.692885 kernel: Simple Boot Flag at 0x36 set to 0x80 May 17 00:42:32.692890 kernel: ACPI: bus type PCI registered May 17 00:42:32.692896 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:42:32.692902 kernel: dca service started, version 1.12.1 May 17 00:42:32.692908 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 17 00:42:32.692914 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 May 17 00:42:32.692920 kernel: PCI: Using configuration type 1 for base access May 17 00:42:32.692927 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 00:42:32.692932 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:42:32.692938 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:42:32.692944 kernel: ACPI: Added _OSI(Module Device) May 17 00:42:32.692950 kernel: ACPI: Added _OSI(Processor Device) May 17 00:42:32.692956 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:42:32.692962 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:42:32.692968 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:42:32.692973 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:42:32.692980 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:42:32.692986 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:42:32.692991 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 17 00:42:32.692997 kernel: ACPI: Interpreter enabled May 17 00:42:32.693002 kernel: ACPI: PM: (supports S0 S1 S5) May 17 00:42:32.693008 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:42:32.693015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:42:32.693020 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 17 00:42:32.693026 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 17 00:42:32.693119 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:42:32.693174 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 17 00:42:32.693223 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 17 00:42:32.693231 kernel: PCI host bridge to bus 0000:00 May 17 00:42:32.693279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 
00:42:32.693323 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 17 00:42:32.693367 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:42:32.693408 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:42:32.693448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 17 00:42:32.693489 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 17 00:42:32.693544 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 17 00:42:32.693598 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 17 00:42:32.693650 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 17 00:42:32.693722 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 17 00:42:32.693779 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 17 00:42:32.693833 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 17 00:42:32.693881 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 17 00:42:32.693928 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 17 00:42:32.693975 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 17 00:42:32.694030 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 17 00:42:32.694078 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 17 00:42:32.694125 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 17 00:42:32.694176 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 17 00:42:32.694223 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 17 00:42:32.694270 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 17 00:42:32.694324 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 17 00:42:32.694372 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 17 00:42:32.694419 kernel: pci 0000:00:0f.0: reg 0x14: [mem 
0xe8000000-0xefffffff pref] May 17 00:42:32.694465 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 17 00:42:32.694510 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 17 00:42:32.694556 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:42:32.694605 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 17 00:42:32.694659 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.706809 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 17 00:42:32.706896 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.706953 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 17 00:42:32.707007 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707057 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 17 00:42:32.707108 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707160 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 17 00:42:32.707211 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707260 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 17 00:42:32.707314 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707363 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 17 00:42:32.707414 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707464 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 17 00:42:32.707514 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707561 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 17 00:42:32.707613 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707670 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 17 00:42:32.707739 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 
0x060400 May 17 00:42:32.707795 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 17 00:42:32.707870 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.707925 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 17 00:42:32.707976 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708023 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 17 00:42:32.708078 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708126 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 17 00:42:32.708178 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708225 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 17 00:42:32.708286 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708343 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 17 00:42:32.708395 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708446 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 17 00:42:32.708497 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708544 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 17 00:42:32.708595 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708642 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 17 00:42:32.708693 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708758 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 17 00:42:32.708809 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708857 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 17 00:42:32.708908 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.708954 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 17 00:42:32.709005 kernel: pci 
0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709072 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 17 00:42:32.709144 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709192 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 17 00:42:32.709244 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709291 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 17 00:42:32.709342 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709391 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 17 00:42:32.709443 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709490 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 17 00:42:32.709541 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709588 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 17 00:42:32.709638 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.709687 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 17 00:42:32.711839 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.711895 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 17 00:42:32.711948 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.711997 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 17 00:42:32.712059 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.712115 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 17 00:42:32.712176 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 17 00:42:32.712237 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 17 00:42:32.712306 kernel: pci_bus 0000:01: extended config space not accessible May 17 00:42:32.712380 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 
00:42:32.712446 kernel: pci_bus 0000:02: extended config space not accessible May 17 00:42:32.712457 kernel: acpiphp: Slot [32] registered May 17 00:42:32.712465 kernel: acpiphp: Slot [33] registered May 17 00:42:32.712471 kernel: acpiphp: Slot [34] registered May 17 00:42:32.712477 kernel: acpiphp: Slot [35] registered May 17 00:42:32.712482 kernel: acpiphp: Slot [36] registered May 17 00:42:32.712488 kernel: acpiphp: Slot [37] registered May 17 00:42:32.712494 kernel: acpiphp: Slot [38] registered May 17 00:42:32.712499 kernel: acpiphp: Slot [39] registered May 17 00:42:32.712506 kernel: acpiphp: Slot [40] registered May 17 00:42:32.712514 kernel: acpiphp: Slot [41] registered May 17 00:42:32.712522 kernel: acpiphp: Slot [42] registered May 17 00:42:32.712533 kernel: acpiphp: Slot [43] registered May 17 00:42:32.712543 kernel: acpiphp: Slot [44] registered May 17 00:42:32.712550 kernel: acpiphp: Slot [45] registered May 17 00:42:32.712556 kernel: acpiphp: Slot [46] registered May 17 00:42:32.712562 kernel: acpiphp: Slot [47] registered May 17 00:42:32.712567 kernel: acpiphp: Slot [48] registered May 17 00:42:32.712573 kernel: acpiphp: Slot [49] registered May 17 00:42:32.712579 kernel: acpiphp: Slot [50] registered May 17 00:42:32.712584 kernel: acpiphp: Slot [51] registered May 17 00:42:32.712591 kernel: acpiphp: Slot [52] registered May 17 00:42:32.712597 kernel: acpiphp: Slot [53] registered May 17 00:42:32.712603 kernel: acpiphp: Slot [54] registered May 17 00:42:32.712608 kernel: acpiphp: Slot [55] registered May 17 00:42:32.712614 kernel: acpiphp: Slot [56] registered May 17 00:42:32.712620 kernel: acpiphp: Slot [57] registered May 17 00:42:32.712625 kernel: acpiphp: Slot [58] registered May 17 00:42:32.712631 kernel: acpiphp: Slot [59] registered May 17 00:42:32.712636 kernel: acpiphp: Slot [60] registered May 17 00:42:32.712642 kernel: acpiphp: Slot [61] registered May 17 00:42:32.712649 kernel: acpiphp: Slot [62] registered May 17 00:42:32.712655 kernel: 
acpiphp: Slot [63] registered May 17 00:42:32.712718 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 17 00:42:32.712771 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 17 00:42:32.712817 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 17 00:42:32.712867 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 17 00:42:32.712914 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 17 00:42:32.712960 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 17 00:42:32.713009 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 17 00:42:32.713054 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 17 00:42:32.713100 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 17 00:42:32.713153 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 17 00:42:32.713203 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 17 00:42:32.713251 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 17 00:42:32.713298 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 17 00:42:32.713348 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:42:32.713396 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 17 00:42:32.713444 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 17 00:42:32.713490 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 17 00:42:32.713536 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 17 00:42:32.713583 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 17 00:42:32.713630 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 17 00:42:32.713678 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 17 00:42:32.713737 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 17 00:42:32.713786 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 17 00:42:32.713852 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 17 00:42:32.713900 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 17 00:42:32.713947 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 17 00:42:32.713995 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 17 00:42:32.714064 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 17 00:42:32.714115 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 17 00:42:32.714162 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 17 00:42:32.714207 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 17 00:42:32.714254 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 17 00:42:32.714304 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 17 00:42:32.714354 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 17 00:42:32.714410 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 17 00:42:32.714459 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 17 00:42:32.714505 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 17 00:42:32.714551 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] May 17 00:42:32.714597 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 17 00:42:32.714642 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 17 00:42:32.714691 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 17 00:42:32.714750 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 17 00:42:32.714800 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 17 00:42:32.714848 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 17 00:42:32.714896 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 17 00:42:32.714947 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 17 00:42:32.714995 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 17 00:42:32.715049 kernel: pci 0000:0b:00.0: supports D1 D2 May 17 00:42:32.715101 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:42:32.715174 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 17 00:42:32.715223 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 17 00:42:32.715269 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 17 00:42:32.715315 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 17 00:42:32.715362 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 17 00:42:32.715408 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 17 00:42:32.715457 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 17 00:42:32.715504 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 17 00:42:32.715551 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 17 00:42:32.715599 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 17 00:42:32.715643 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 17 00:42:32.715708 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 17 00:42:32.715761 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 17 00:42:32.715808 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 17 00:42:32.715857 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 17 00:42:32.715904 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 17 00:42:32.715949 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 17 00:42:32.715996 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 17 00:42:32.716043 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 17 00:42:32.720851 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 17 00:42:32.720943 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 17 00:42:32.721033 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 17 00:42:32.721122 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 17 00:42:32.721206 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] May 17 00:42:32.721290 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 17 00:42:32.721373 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 17 00:42:32.721457 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 17 00:42:32.721541 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 17 00:42:32.721620 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 17 00:42:32.721708 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 17 00:42:32.721795 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 17 00:42:32.721879 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 17 00:42:32.721963 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 17 00:42:32.722042 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 17 00:42:32.722122 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 17 00:42:32.722207 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 17 00:42:32.722289 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 17 00:42:32.722374 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 17 00:42:32.722457 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 17 00:42:32.722540 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 17 00:42:32.722623 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 17 00:42:32.722714 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 17 00:42:32.722799 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 17 00:42:32.722883 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 17 00:42:32.722967 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 17 00:42:32.723054 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 17 00:42:32.723138 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 17 00:42:32.723219 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 17 00:42:32.723303 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 17 00:42:32.723383 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 17 00:42:32.723464 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 17 00:42:32.723548 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 17 00:42:32.723630 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 17 00:42:32.723733 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 17 00:42:32.723824 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 17 00:42:32.723910 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 17 00:42:32.723992 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 17 00:42:32.724074 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 17 00:42:32.724160 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 17 00:42:32.724243 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 17 00:42:32.724326 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 17 00:42:32.724411 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 17 00:42:32.724495 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 17 00:42:32.724576 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 17 00:42:32.724658 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 17 00:42:32.724761 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 17 00:42:32.724844 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 17 00:42:32.724924 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 17 00:42:32.725010 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 17 
00:42:32.725096 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 17 00:42:32.725473 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 17 00:42:32.725570 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 17 00:42:32.725658 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 17 00:42:32.725851 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 17 00:42:32.725940 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 17 00:42:32.726023 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 17 00:42:32.726107 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 17 00:42:32.726196 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 17 00:42:32.726279 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 17 00:42:32.726362 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 17 00:42:32.726376 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 17 00:42:32.726387 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 17 00:42:32.726398 kernel: ACPI: PCI: Interrupt link LNKB disabled May 17 00:42:32.726408 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:42:32.726418 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 17 00:42:32.726431 kernel: iommu: Default domain type: Translated May 17 00:42:32.726441 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:42:32.726521 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 17 00:42:32.726603 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:42:32.726683 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 17 00:42:32.726697 kernel: vgaarb: loaded May 17 00:42:32.726717 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:42:32.726729 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:42:32.726739 kernel: PTP clock support registered May 17 00:42:32.726752 kernel: PCI: Using ACPI for IRQ routing May 17 00:42:32.726763 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:42:32.726773 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 17 00:42:32.726783 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 17 00:42:32.726794 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 17 00:42:32.726804 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 17 00:42:32.726819 kernel: clocksource: Switched to clocksource tsc-early May 17 00:42:32.726829 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:42:32.726840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:42:32.726852 kernel: pnp: PnP ACPI init May 17 00:42:32.727014 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 17 00:42:32.727093 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 17 00:42:32.727168 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 17 00:42:32.727247 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 17 00:42:32.727326 kernel: pnp 00:06: [dma 2] May 17 00:42:32.727410 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 17 00:42:32.727487 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 17 00:42:32.727561 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 17 00:42:32.727574 kernel: pnp: PnP ACPI: found 8 devices May 17 00:42:32.727585 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:42:32.727596 kernel: NET: Registered PF_INET protocol family May 17 00:42:32.727606 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:42:32.727617 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
May 17 00:42:32.727627 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:42:32.727640 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:42:32.727650 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:42:32.727660 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:42:32.727670 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:42:32.727680 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:42:32.727690 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:42:32.727708 kernel: NET: Registered PF_XDP protocol family May 17 00:42:32.727794 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:42:32.727888 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 17 00:42:32.727972 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 17 00:42:32.728061 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 17 00:42:32.728145 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 17 00:42:32.728228 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 17 00:42:32.728312 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 17 00:42:32.728398 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 17 00:42:32.728481 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 17 00:42:32.728564 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 17 00:42:32.728648 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 May 17 00:42:32.734120 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 17 00:42:32.734228 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 17 00:42:32.734320 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 17 00:42:32.734409 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 17 00:42:32.734496 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 17 00:42:32.734586 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 17 00:42:32.734688 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 17 00:42:32.734798 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 17 00:42:32.734890 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 17 00:42:32.734977 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 17 00:42:32.735059 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 17 00:42:32.735144 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 17 00:42:32.735230 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 17 00:42:32.735316 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 17 00:42:32.735411 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.735507 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.735589 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.735669 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.738273 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.738372 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.738468 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.738553 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.738643 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.741004 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.741110 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.741198 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.741285 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.741370 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.741458 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.741544 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.741633 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.741744 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.741831 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.741914 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.741997 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.742081 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.742166 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.742247 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.742335 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.742418 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.742501 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.742582 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] May 17 00:42:32.742666 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.744265 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.744374 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 17 00:42:32.744460 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.744550 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.744635 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.745791 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.745892 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.745982 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.746065 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.746150 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.746233 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.746322 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.746405 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.746490 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.746572 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.746654 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.746746 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.746836 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.746917 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.747000 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.747087 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.747171 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.747254 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.747335 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 17 00:42:32.747412 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.747489 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.747569 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.747647 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.749996 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.750107 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.750197 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.750284 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.750371 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.750458 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.750542 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.750628 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.752316 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.752421 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.752511 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.752601 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.752682 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.752787 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.752870 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.752957 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] May 17 00:42:32.753039 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.753122 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 17 00:42:32.753204 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.753288 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 17 00:42:32.753375 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.753462 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 17 00:42:32.753545 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.753629 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 17 00:42:32.753723 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.753816 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 17 00:42:32.753903 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 17 00:42:32.753989 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:42:32.754085 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 17 00:42:32.754169 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 17 00:42:32.754257 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 17 00:42:32.754339 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 17 00:42:32.754430 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 17 00:42:32.754556 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 17 00:42:32.754640 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 17 00:42:32.754735 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 17 00:42:32.754818 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 17 00:42:32.754905 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 17 00:42:32.754994 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] May 17 00:42:32.755077 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 17 00:42:32.755161 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 17 00:42:32.755245 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 17 00:42:32.755329 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 17 00:42:32.755412 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 17 00:42:32.755496 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 17 00:42:32.755577 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 17 00:42:32.755658 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 17 00:42:32.755756 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 17 00:42:32.755840 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 17 00:42:32.755924 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 17 00:42:32.756007 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 17 00:42:32.756095 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 17 00:42:32.756177 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 17 00:42:32.756265 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 17 00:42:32.756350 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 17 00:42:32.756432 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 17 00:42:32.757101 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 17 00:42:32.757196 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 17 00:42:32.757718 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 17 00:42:32.757818 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 17 00:42:32.758188 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 17 
00:42:32.758282 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 17 00:42:32.758863 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 17 00:42:32.758966 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 17 00:42:32.759055 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 17 00:42:32.759142 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 17 00:42:32.759226 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 17 00:42:32.759310 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 17 00:42:32.759396 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 17 00:42:32.759481 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 17 00:42:32.759564 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 17 00:42:32.759646 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 17 00:42:32.759880 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 17 00:42:32.759971 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 17 00:42:32.760456 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 17 00:42:32.760554 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 17 00:42:32.760642 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 17 00:42:32.760749 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 17 00:42:32.760850 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 17 00:42:32.760936 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 17 00:42:32.761022 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 17 00:42:32.761112 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 17 00:42:32.761200 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 17 00:42:32.761286 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 17 
00:42:32.761371 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 17 00:42:32.761459 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 17 00:42:32.761543 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 17 00:42:32.761629 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 17 00:42:32.761734 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 17 00:42:32.761835 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 17 00:42:32.761924 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 17 00:42:32.762175 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 17 00:42:32.762266 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 17 00:42:32.762796 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 17 00:42:32.762888 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 17 00:42:32.762976 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 17 00:42:32.763055 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 17 00:42:32.763134 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 17 00:42:32.763235 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 17 00:42:32.763365 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 17 00:42:32.763454 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 17 00:42:32.763537 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 17 00:42:32.763622 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 17 00:42:32.763716 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 17 00:42:32.763799 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 17 00:42:32.763891 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 17 00:42:32.763977 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] May 17 00:42:32.764064 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 17 00:42:32.764150 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 17 00:42:32.764237 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 17 00:42:32.764325 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 17 00:42:32.764409 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 17 00:42:32.764496 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 17 00:42:32.764581 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 17 00:42:32.764667 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 17 00:42:32.764763 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 17 00:42:32.764847 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 17 00:42:32.764929 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 17 00:42:32.765011 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 17 00:42:32.765099 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 17 00:42:32.765182 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 17 00:42:32.765264 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 17 00:42:32.765347 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 17 00:42:32.765431 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 17 00:42:32.765515 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 17 00:42:32.765598 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 17 00:42:32.765682 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 17 00:42:32.765988 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 17 00:42:32.766078 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 17 00:42:32.766167 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] May 17 00:42:32.766252 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 17 00:42:32.766334 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 17 00:42:32.766416 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 17 00:42:32.766496 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 17 00:42:32.766580 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 17 00:42:32.766665 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 17 00:42:32.766795 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 17 00:42:32.766887 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 17 00:42:32.766973 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 17 00:42:32.767485 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 17 00:42:32.767573 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 17 00:42:32.767843 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 17 00:42:32.767922 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 17 00:42:32.768390 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 17 00:42:32.768463 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 17 00:42:32.768904 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 17 00:42:32.768990 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 17 00:42:32.769072 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 17 00:42:32.769150 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 17 00:42:32.769249 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 17 00:42:32.769633 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 17 00:42:32.769734 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] May 17 00:42:32.771272 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 17 00:42:32.771357 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 17 00:42:32.771449 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 17 00:42:32.771527 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 17 00:42:32.771605 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 17 00:42:32.771686 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 17 00:42:32.771783 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 17 00:42:32.771861 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 17 00:42:32.771945 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 17 00:42:32.772024 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 17 00:42:32.772101 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 17 00:42:32.772187 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 17 00:42:32.772266 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 17 00:42:32.772350 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 17 00:42:32.772630 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 17 00:42:32.772761 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 17 00:42:32.772842 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 17 00:42:32.772928 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 17 00:42:32.773004 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 17 00:42:32.773085 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 17 00:42:32.773163 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 17 00:42:32.773250 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 17 00:42:32.773326 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 17 00:42:32.773400 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 17 00:42:32.773482 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 17 00:42:32.773560 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 17 00:42:32.773638 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 17 00:42:32.773754 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 17 00:42:32.773835 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 17 00:42:32.773910 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 17 00:42:32.773989 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 17 00:42:32.774064 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 17 00:42:32.774146 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 17 00:42:32.774220 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 17 00:42:32.774435 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 17 00:42:32.774518 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 17 00:42:32.774601 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 17 00:42:32.774682 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 17 00:42:32.774811 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 17 00:42:32.774891 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 17 00:42:32.774973 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 17 00:42:32.775065 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 17 00:42:32.775144 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 17 00:42:32.775227 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 17 00:42:32.775307 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 17 00:42:32.775383 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 17 00:42:32.775471 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 17 00:42:32.775550 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 17 00:42:32.775628 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 17 00:42:32.775719 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 17 00:42:32.775799 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 17 00:42:32.775883 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 17 00:42:32.775960 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 17 00:42:32.776047 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 17 00:42:32.776123 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 17 00:42:32.776222 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 17 00:42:32.776298 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 17 00:42:32.777267 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 17 00:42:32.777352 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 17 00:42:32.777445 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 17 00:42:32.777523 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 17 00:42:32.777596 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 17 00:42:32.777677 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 17 00:42:32.777807 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 17 00:42:32.777894 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 17 00:42:32.777981 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 17 00:42:32.778062 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 17 00:42:32.778148 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 17 00:42:32.778226 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 17 00:42:32.778311 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 17 00:42:32.778389 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 17 00:42:32.778476 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 17 00:42:32.778553 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 17 00:42:32.778638 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 17 00:42:32.778726 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 17 00:42:32.778811 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 17 00:42:32.778888 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 17 00:42:32.778977 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:42:32.778996 kernel: PCI: CLS 32 bytes, default 64 May 17 00:42:32.779007 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:42:32.779019 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 17 00:42:32.779030 kernel: clocksource: Switched to clocksource tsc May 17 00:42:32.779041 kernel: Initialise system trusted keyrings May 17 00:42:32.779051 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:42:32.779061 kernel: Key type asymmetric registered May 17 00:42:32.779072 kernel: Asymmetric key parser 'x509' registered May 17 00:42:32.779085 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:42:32.779096 kernel: io scheduler mq-deadline registered May 17 00:42:32.779106 kernel: io scheduler kyber registered May 17 00:42:32.779116 kernel: io scheduler bfq 
registered May 17 00:42:32.779202 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 17 00:42:32.779287 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.779375 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 17 00:42:32.781781 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.781845 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 17 00:42:32.781901 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.781953 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 17 00:42:32.782002 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782051 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 17 00:42:32.782099 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782151 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 17 00:42:32.782199 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782249 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 17 00:42:32.782296 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782345 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 17 00:42:32.782392 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782446 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 17 00:42:32.782511 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782561 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 17 00:42:32.782609 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782658 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 17 00:42:32.782714 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782765 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 17 00:42:32.782816 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782865 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 17 00:42:32.782914 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.782962 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 17 00:42:32.783010 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783061 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 17 00:42:32.783108 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783170 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 17 00:42:32.783220 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783268 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 17 00:42:32.783316 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783381 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 17 00:42:32.783432 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783484 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 17 00:42:32.783531 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783590 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 17 00:42:32.783639 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783690 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 17 00:42:32.783746 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783794 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 17 00:42:32.783848 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783897 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 17 00:42:32.783944 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.783995 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 17 00:42:32.784044 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784091 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 17 00:42:32.784139 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784188 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 17 00:42:32.784236 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784286 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 17 00:42:32.784332 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784380 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 17 00:42:32.784428 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784477 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 17 00:42:32.784526 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784574 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 17 00:42:32.784622 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784670 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 17 00:42:32.784730 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784783 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 17 00:42:32.784832 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 17 00:42:32.784841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:42:32.784847 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:42:32.784854 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:42:32.784860 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 17 00:42:32.784866 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:42:32.784872 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:42:32.784923 kernel: rtc_cmos 00:01: registered as rtc0 May 17 00:42:32.784968 kernel: rtc_cmos 00:01: setting system clock to 2025-05-17T00:42:32 UTC (1747442552) May 17 00:42:32.785011 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 17 00:42:32.785019 kernel: intel_pstate: CPU model not supported May 17 00:42:32.785026 kernel: NET: Registered PF_INET6 protocol family May 17 00:42:32.785032 kernel: Segment Routing with IPv6 May 17 00:42:32.785038 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:42:32.785044 kernel: NET: Registered PF_PACKET protocol family May 17 00:42:32.785052 kernel: Key type dns_resolver registered May 17 00:42:32.785058 kernel: IPI shorthand broadcast: enabled May 17 00:42:32.785064 kernel: sched_clock: Marking stable (864161037, 227850626)->(1158967453, -66955790) May 17 00:42:32.785070 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:42:32.785077 kernel: registered taskstats version 1 May 17 00:42:32.785083 kernel: Loading compiled-in X.509 certificates May 17 00:42:32.785089 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:42:32.785095 kernel: Key type .fscrypt registered May 17 00:42:32.785101 kernel: Key type fscrypt-provisioning 
registered May 17 00:42:32.785108 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:42:32.785114 kernel: ima: Allocated hash algorithm: sha1 May 17 00:42:32.785120 kernel: ima: No architecture policies found May 17 00:42:32.785126 kernel: clk: Disabling unused clocks May 17 00:42:32.785133 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:42:32.785139 kernel: Write protecting the kernel read-only data: 28672k May 17 00:42:32.785145 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:42:32.785151 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:42:32.785157 kernel: Run /init as init process May 17 00:42:32.785165 kernel: with arguments: May 17 00:42:32.785172 kernel: /init May 17 00:42:32.785178 kernel: with environment: May 17 00:42:32.785184 kernel: HOME=/ May 17 00:42:32.785190 kernel: TERM=linux May 17 00:42:32.785196 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:42:32.785204 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:42:32.785211 systemd[1]: Detected virtualization vmware. May 17 00:42:32.785219 systemd[1]: Detected architecture x86-64. May 17 00:42:32.785225 systemd[1]: Running in initrd. May 17 00:42:32.785231 systemd[1]: No hostname configured, using default hostname. May 17 00:42:32.785237 systemd[1]: Hostname set to <localhost>. May 17 00:42:32.785244 systemd[1]: Initializing machine ID from random generator. May 17 00:42:32.785250 systemd[1]: Queued start job for default target initrd.target. May 17 00:42:32.785256 systemd[1]: Started systemd-ask-password-console.path. May 17 00:42:32.785262 systemd[1]: Reached target cryptsetup.target. 
May 17 00:42:32.785269 systemd[1]: Reached target paths.target. May 17 00:42:32.785275 systemd[1]: Reached target slices.target. May 17 00:42:32.785281 systemd[1]: Reached target swap.target. May 17 00:42:32.785287 systemd[1]: Reached target timers.target. May 17 00:42:32.785294 systemd[1]: Listening on iscsid.socket. May 17 00:42:32.785300 systemd[1]: Listening on iscsiuio.socket. May 17 00:42:32.785306 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:42:32.785313 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:42:32.785320 systemd[1]: Listening on systemd-journald.socket. May 17 00:42:32.785326 systemd[1]: Listening on systemd-networkd.socket. May 17 00:42:32.785332 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:42:32.785338 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:42:32.785344 systemd[1]: Reached target sockets.target. May 17 00:42:32.785350 systemd[1]: Starting kmod-static-nodes.service... May 17 00:42:32.785356 systemd[1]: Finished network-cleanup.service. May 17 00:42:32.785363 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:42:32.785369 systemd[1]: Starting systemd-journald.service... May 17 00:42:32.785377 systemd[1]: Starting systemd-modules-load.service... May 17 00:42:32.785383 systemd[1]: Starting systemd-resolved.service... May 17 00:42:32.785389 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:42:32.785395 systemd[1]: Finished kmod-static-nodes.service. May 17 00:42:32.785401 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:42:32.785407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:42:32.785414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:42:32.785420 systemd[1]: Finished systemd-vconsole-setup.service. 
May 17 00:42:32.785426 kernel: audit: type=1130 audit(1747442552.694:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.785434 kernel: audit: type=1130 audit(1747442552.694:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.785440 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:42:32.785447 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:42:32.785453 systemd[1]: Starting dracut-cmdline.service... May 17 00:42:32.785459 kernel: audit: type=1130 audit(1747442552.715:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.785466 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:42:32.785472 kernel: Bridge firewalling registered May 17 00:42:32.785479 kernel: SCSI subsystem initialized May 17 00:42:32.785485 systemd[1]: Started systemd-resolved.service. May 17 00:42:32.785491 systemd[1]: Reached target nss-lookup.target. May 17 00:42:32.785498 kernel: audit: type=1130 audit(1747442552.752:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.785504 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:42:32.785511 kernel: device-mapper: uevent: version 1.0.3 May 17 00:42:32.785517 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:42:32.785524 systemd[1]: Finished systemd-modules-load.service. May 17 00:42:32.785530 systemd[1]: Starting systemd-sysctl.service... May 17 00:42:32.785537 kernel: audit: type=1130 audit(1747442552.774:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.785547 systemd-journald[217]: Journal started May 17 00:42:32.785578 systemd-journald[217]: Runtime Journal (/run/log/journal/ca12ce11e0f34af8b7a1b89db0b74756) is 4.8M, max 38.8M, 34.0M free. May 17 00:42:32.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:32.678167 systemd-modules-load[218]: Inserted module 'overlay' May 17 00:42:32.789156 systemd[1]: Started systemd-journald.service. May 17 00:42:32.789170 kernel: audit: type=1130 audit(1747442552.785:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.734656 systemd-modules-load[218]: Inserted module 'br_netfilter' May 17 00:42:32.740448 systemd-resolved[219]: Positive Trust Anchors: May 17 00:42:32.740461 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:42:32.740492 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:42:32.749599 systemd-resolved[219]: Defaulting to hostname 'linux'. 
May 17 00:42:32.774780 systemd-modules-load[218]: Inserted module 'dm_multipath' May 17 00:42:32.790767 dracut-cmdline[233]: dracut-dracut-053 May 17 00:42:32.790767 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 17 00:42:32.790767 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:42:32.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.791272 systemd[1]: Finished systemd-sysctl.service. May 17 00:42:32.794920 kernel: audit: type=1130 audit(1747442552.789:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.808718 kernel: Loading iSCSI transport class v2.0-870. May 17 00:42:32.821719 kernel: iscsi: registered transport (tcp) May 17 00:42:32.837721 kernel: iscsi: registered transport (qla4xxx) May 17 00:42:32.837763 kernel: QLogic iSCSI HBA Driver May 17 00:42:32.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.855067 systemd[1]: Finished dracut-cmdline.service. May 17 00:42:32.858888 kernel: audit: type=1130 audit(1747442552.853:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:32.855690 systemd[1]: Starting dracut-pre-udev.service... May 17 00:42:32.893737 kernel: raid6: avx2x4 gen() 37075 MB/s May 17 00:42:32.910720 kernel: raid6: avx2x4 xor() 16244 MB/s May 17 00:42:32.927718 kernel: raid6: avx2x2 gen() 52735 MB/s May 17 00:42:32.944725 kernel: raid6: avx2x2 xor() 31349 MB/s May 17 00:42:32.961730 kernel: raid6: avx2x1 gen() 37338 MB/s May 17 00:42:32.978728 kernel: raid6: avx2x1 xor() 22790 MB/s May 17 00:42:32.995719 kernel: raid6: sse2x4 gen() 19239 MB/s May 17 00:42:33.012722 kernel: raid6: sse2x4 xor() 11513 MB/s May 17 00:42:33.029755 kernel: raid6: sse2x2 gen() 19356 MB/s May 17 00:42:33.046726 kernel: raid6: sse2x2 xor() 11668 MB/s May 17 00:42:33.063722 kernel: raid6: sse2x1 gen() 17134 MB/s May 17 00:42:33.080974 kernel: raid6: sse2x1 xor() 8826 MB/s May 17 00:42:33.081068 kernel: raid6: using algorithm avx2x2 gen() 52735 MB/s May 17 00:42:33.081083 kernel: raid6: .... xor() 31349 MB/s, rmw enabled May 17 00:42:33.082190 kernel: raid6: using avx2x2 recovery algorithm May 17 00:42:33.090734 kernel: xor: automatically using best checksumming function avx May 17 00:42:33.155726 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:42:33.161504 systemd[1]: Finished dracut-pre-udev.service. May 17 00:42:33.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:33.162231 systemd[1]: Starting systemd-udevd.service... May 17 00:42:33.164848 kernel: audit: type=1130 audit(1747442553.159:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:33.160000 audit: BPF prog-id=7 op=LOAD May 17 00:42:33.160000 audit: BPF prog-id=8 op=LOAD May 17 00:42:33.175158 systemd-udevd[416]: Using default interface naming scheme 'v252'. May 17 00:42:33.179162 systemd[1]: Started systemd-udevd.service. May 17 00:42:33.179925 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:42:33.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:33.189290 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation May 17 00:42:33.208278 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:42:33.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:33.208988 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:42:33.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:33.274258 systemd[1]: Finished systemd-udev-trigger.service. 
May 17 00:42:33.343751 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 17 00:42:33.343784 kernel: vmw_pvscsi: using 64bit dma May 17 00:42:33.350351 kernel: vmw_pvscsi: max_id: 16 May 17 00:42:33.350384 kernel: vmw_pvscsi: setting ring_pages to 8 May 17 00:42:33.367328 kernel: vmw_pvscsi: enabling reqCallThreshold May 17 00:42:33.367376 kernel: vmw_pvscsi: driver-based request coalescing enabled May 17 00:42:33.367386 kernel: vmw_pvscsi: using MSI-X May 17 00:42:33.367393 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 17 00:42:33.369079 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:42:33.369110 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 17 00:42:33.371258 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 17 00:42:33.379879 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI May 17 00:42:33.379916 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 17 00:42:33.382887 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 17 00:42:33.386722 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 17 00:42:33.400738 kernel: libata version 3.00 loaded. May 17 00:42:33.402732 kernel: ata_piix 0000:00:07.1: version 2.13 May 17 00:42:33.410688 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 17 00:42:33.513812 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:42:33.513831 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:42:33.513947 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 17 00:42:33.514052 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 17 00:42:33.514135 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 17 00:42:33.514216 kernel: AES CTR mode by8 optimization enabled May 17 00:42:33.514225 kernel: scsi host1: ata_piix May 17 00:42:33.514314 kernel: scsi host2: ata_piix May 17 00:42:33.514390 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 17 00:42:33.514403 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 17 00:42:33.514413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:42:33.514428 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:42:33.574739 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 17 00:42:33.578717 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 17 00:42:33.607023 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 17 00:42:33.624890 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:42:33.624907 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:42:33.650767 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:42:33.663924 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:42:33.664068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:42:33.666072 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:42:33.666634 systemd[1]: Starting disk-uuid.service... May 17 00:42:33.672720 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (473) May 17 00:42:33.679361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 17 00:42:33.790715 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:42:33.799388 kernel: GPT:disk_guids don't match. May 17 00:42:33.799425 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:42:33.799434 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:42:34.828721 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:42:34.828793 disk-uuid[550]: The operation has completed successfully. May 17 00:42:35.076331 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:42:35.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.076393 systemd[1]: Finished disk-uuid.service. May 17 00:42:35.077078 systemd[1]: Starting verity-setup.service... May 17 00:42:35.101720 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:42:35.497044 systemd[1]: Found device dev-mapper-usr.device. May 17 00:42:35.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.497720 systemd[1]: Mounting sysusr-usr.mount... May 17 00:42:35.498002 systemd[1]: Finished verity-setup.service. May 17 00:42:35.571483 systemd[1]: Mounted sysusr-usr.mount. May 17 00:42:35.571735 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:42:35.572159 systemd[1]: Starting afterburn-network-kargs.service... May 17 00:42:35.572689 systemd[1]: Starting ignition-setup.service... 
May 17 00:42:35.594965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:42:35.595002 kernel: BTRFS info (device sda6): using free space tree May 17 00:42:35.595011 kernel: BTRFS info (device sda6): has skinny extents May 17 00:42:35.652730 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:42:35.677973 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:42:35.683363 systemd[1]: Finished ignition-setup.service. May 17 00:42:35.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.684026 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:42:35.796691 systemd[1]: Finished afterburn-network-kargs.service. May 17 00:42:35.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.797311 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:42:35.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.843753 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:42:35.842000 audit: BPF prog-id=9 op=LOAD May 17 00:42:35.844817 systemd[1]: Starting systemd-networkd.service... 
May 17 00:42:35.860881 systemd-networkd[736]: lo: Link UP May 17 00:42:35.860889 systemd-networkd[736]: lo: Gained carrier May 17 00:42:35.861172 systemd-networkd[736]: Enumeration completed May 17 00:42:35.865093 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 17 00:42:35.865219 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 17 00:42:35.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.861386 systemd-networkd[736]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 17 00:42:35.861397 systemd[1]: Started systemd-networkd.service. May 17 00:42:35.861575 systemd[1]: Reached target network.target. May 17 00:42:35.862128 systemd[1]: Starting iscsiuio.service... May 17 00:42:35.865616 systemd-networkd[736]: ens192: Link UP May 17 00:42:35.865618 systemd-networkd[736]: ens192: Gained carrier May 17 00:42:35.867040 systemd[1]: Started iscsiuio.service. May 17 00:42:35.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.867793 systemd[1]: Starting iscsid.service... May 17 00:42:35.869941 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:42:35.869941 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. 
May 17 00:42:35.869941 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:42:35.869941 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:42:35.869941 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:42:35.869941 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:42:35.870846 systemd[1]: Started iscsid.service. May 17 00:42:35.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.871629 systemd[1]: Starting dracut-initqueue.service... May 17 00:42:35.878728 systemd[1]: Finished dracut-initqueue.service. May 17 00:42:35.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:35.879117 systemd[1]: Reached target remote-fs-pre.target. May 17 00:42:35.879332 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:42:35.879558 systemd[1]: Reached target remote-fs.target. May 17 00:42:35.880336 systemd[1]: Starting dracut-pre-mount.service... May 17 00:42:35.885787 systemd[1]: Finished dracut-pre-mount.service. May 17 00:42:35.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
May 17 00:42:36.665915 ignition[607]: Ignition 2.14.0
May 17 00:42:36.665929 ignition[607]: Stage: fetch-offline
May 17 00:42:36.665978 ignition[607]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:36.665999 ignition[607]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 17 00:42:36.673594 ignition[607]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 17 00:42:36.673791 ignition[607]: parsed url from cmdline: ""
May 17 00:42:36.673795 ignition[607]: no config URL provided
May 17 00:42:36.673799 ignition[607]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:42:36.673807 ignition[607]: no config at "/usr/lib/ignition/user.ign"
May 17 00:42:36.686280 ignition[607]: config successfully fetched
May 17 00:42:36.686327 ignition[607]: parsing config with SHA512: 17bf85b29aa94edaa8605b8eec6cc006667be69c4e121ed1b8e97f894e34e9fcf0e8b929453514bed86bd9ffc5fe54ead45402dd2c32e973cded08dc50087e12
May 17 00:42:36.689629 unknown[607]: fetched base config from "system"
May 17 00:42:36.689638 unknown[607]: fetched user config from "vmware"
May 17 00:42:36.690151 ignition[607]: fetch-offline: fetch-offline passed
May 17 00:42:36.690220 ignition[607]: Ignition finished successfully
May 17 00:42:36.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:36.691261 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:42:36.691449 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:42:36.692082 systemd[1]: Starting ignition-kargs.service...
May 17 00:42:36.698161 ignition[755]: Ignition 2.14.0
May 17 00:42:36.698415 ignition[755]: Stage: kargs
May 17 00:42:36.698586 ignition[755]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:36.698757 ignition[755]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 17 00:42:36.699936 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 17 00:42:36.700995 ignition[755]: kargs: kargs passed
May 17 00:42:36.701242 ignition[755]: Ignition finished successfully
May 17 00:42:36.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:36.702138 systemd[1]: Finished ignition-kargs.service.
May 17 00:42:36.702854 systemd[1]: Starting ignition-disks.service...
May 17 00:42:36.707327 ignition[761]: Ignition 2.14.0
May 17 00:42:36.707334 ignition[761]: Stage: disks
May 17 00:42:36.707394 ignition[761]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:36.707403 ignition[761]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 17 00:42:36.708669 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 17 00:42:36.710189 ignition[761]: disks: disks passed
May 17 00:42:36.710219 ignition[761]: Ignition finished successfully
May 17 00:42:36.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:36.711004 systemd[1]: Finished ignition-disks.service.
May 17 00:42:36.711183 systemd[1]: Reached target initrd-root-device.target.
May 17 00:42:36.711279 systemd[1]: Reached target local-fs-pre.target.
May 17 00:42:36.711364 systemd[1]: Reached target local-fs.target.
May 17 00:42:36.711447 systemd[1]: Reached target sysinit.target.
May 17 00:42:36.711527 systemd[1]: Reached target basic.target.
May 17 00:42:36.712083 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:42:36.764144 systemd-fsck[769]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks
May 17 00:42:36.766844 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:42:36.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:36.767816 kernel: kauditd_printk_skb: 20 callbacks suppressed
May 17 00:42:36.767835 kernel: audit: type=1130 audit(1747442556.765:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:36.770406 systemd[1]: Mounting sysroot.mount...
May 17 00:42:36.820714 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:42:36.821285 systemd[1]: Mounted sysroot.mount.
May 17 00:42:36.821792 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:42:36.826057 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:42:36.826698 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:42:36.826946 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:42:36.827167 systemd[1]: Reached target ignition-diskful.target.
May 17 00:42:36.828044 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:42:36.832778 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:42:36.833485 systemd[1]: Starting initrd-setup-root.service...
May 17 00:42:36.837791 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:42:36.841830 initrd-setup-root[788]: cut: /sysroot/etc/group: No such file or directory
May 17 00:42:36.845309 initrd-setup-root[796]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:42:36.848490 initrd-setup-root[804]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:42:36.857717 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (775)
May 17 00:42:36.866088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:42:36.866124 kernel: BTRFS info (device sda6): using free space tree
May 17 00:42:36.866133 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:42:36.903727 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:42:36.912138 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:42:37.020474 systemd[1]: Finished initrd-setup-root.service.
May 17 00:42:37.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:37.021048 systemd[1]: Starting ignition-mount.service...
May 17 00:42:37.024402 kernel: audit: type=1130 audit(1747442557.018:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:37.024129 systemd[1]: Starting sysroot-boot.service...
May 17 00:42:37.027204 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:42:37.027283 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:42:37.035282 ignition[840]: INFO : Ignition 2.14.0
May 17 00:42:37.035282 ignition[840]: INFO : Stage: mount
May 17 00:42:37.035667 ignition[840]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:37.035667 ignition[840]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 17 00:42:37.036794 ignition[840]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 17 00:42:37.038609 ignition[840]: INFO : mount: mount passed
May 17 00:42:37.038609 ignition[840]: INFO : Ignition finished successfully
May 17 00:42:37.038840 systemd[1]: Finished ignition-mount.service.
May 17 00:42:37.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:37.039460 systemd[1]: Starting ignition-files.service...
May 17 00:42:37.042033 kernel: audit: type=1130 audit(1747442557.037:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:37.044140 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:42:37.084727 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (847)
May 17 00:42:37.086137 systemd[1]: Finished sysroot-boot.service.
May 17 00:42:37.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:37.089713 kernel: audit: type=1130 audit(1747442557.084:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:37.098858 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:42:37.098902 kernel: BTRFS info (device sda6): using free space tree
May 17 00:42:37.098915 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:42:37.143718 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:42:37.150434 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:42:37.156224 ignition[869]: INFO : Ignition 2.14.0
May 17 00:42:37.156224 ignition[869]: INFO : Stage: files
May 17 00:42:37.156662 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:37.156662 ignition[869]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 17 00:42:37.157696 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 17 00:42:37.163331 ignition[869]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:42:37.164526 ignition[869]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:42:37.164526 ignition[869]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:42:37.170456 ignition[869]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:42:37.170734 ignition[869]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:42:37.180045 unknown[869]: wrote ssh authorized keys file for user: core
May 17 00:42:37.180667 ignition[869]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:42:37.184642 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:42:37.184642 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:42:37.275210 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:42:37.496869 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:42:37.501103 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:42:37.501398 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:42:37.850889 systemd-networkd[736]: ens192: Gained IPv6LL
May 17 00:42:37.996486 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:42:38.048210 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:42:38.048210 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:42:38.048596 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:42:38.050262 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:42:38.050429 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:42:38.050429 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:42:38.050429 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:42:38.055243 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
May 17 00:42:38.056565 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:42:38.058422 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3880951430"
May 17 00:42:38.058661 ignition[869]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3880951430": device or resource busy
May 17 00:42:38.058878 ignition[869]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3880951430", trying btrfs: device or resource busy
May 17 00:42:38.059090 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3880951430"
May 17 00:42:38.061115 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3880951430"
May 17 00:42:38.081234 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3880951430"
May 17 00:42:38.082218 systemd[1]: mnt-oem3880951430.mount: Deactivated successfully.
May 17 00:42:38.082937 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3880951430"
May 17 00:42:38.083165 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
May 17 00:42:38.083380 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:42:38.083652 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:42:38.710626 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
May 17 00:42:38.899410 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:42:38.908369 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 17 00:42:38.908726 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 17 00:42:38.908932 ignition[869]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
May 17 00:42:38.909073 ignition[869]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
May 17 00:42:38.909214 ignition[869]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
May 17 00:42:38.909377 ignition[869]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:42:38.909614 ignition[869]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:42:38.909809 ignition[869]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
May 17 00:42:38.909959 ignition[869]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
May 17 00:42:38.910124 ignition[869]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:42:38.910364 ignition[869]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:42:38.910550 ignition[869]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
May 17 00:42:38.910696 ignition[869]: INFO : files: op(16): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:42:38.910860 ignition[869]: INFO : files: op(16): op(17): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:42:39.806482 ignition[869]: INFO : files: op(16): op(17): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:42:39.807033 ignition[869]: INFO : files: op(16): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:42:39.807270 ignition[869]: INFO : files: op(18): [started] setting preset to enabled for "vmtoolsd.service"
May 17 00:42:39.807449 ignition[869]: INFO : files: op(18): [finished] setting preset to enabled for "vmtoolsd.service"
May 17 00:42:39.807655 ignition[869]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:42:39.807871 ignition[869]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:42:39.808150 ignition[869]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:42:39.808403 ignition[869]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:42:39.808588 ignition[869]: INFO : files: files passed
May 17 00:42:39.808741 ignition[869]: INFO : Ignition finished successfully
May 17 00:42:39.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.810143 systemd[1]: Finished ignition-files.service.
May 17 00:42:39.810767 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:42:39.810898 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:42:39.811294 systemd[1]: Starting ignition-quench.service...
May 17 00:42:39.814718 kernel: audit: type=1130 audit(1747442559.808:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.820950 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:42:39.821021 systemd[1]: Finished ignition-quench.service.
May 17 00:42:39.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.822510 initrd-setup-root-after-ignition[895]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:42:39.826539 kernel: audit: type=1130 audit(1747442559.819:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.826558 kernel: audit: type=1131 audit(1747442559.819:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.822409 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:42:39.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.827029 systemd[1]: Reached target ignition-complete.target.
May 17 00:42:39.829714 kernel: audit: type=1130 audit(1747442559.825:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.830326 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:42:39.839524 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:42:39.839811 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:42:39.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.840146 systemd[1]: Reached target initrd-fs.target.
May 17 00:42:39.844957 kernel: audit: type=1130 audit(1747442559.838:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.844974 kernel: audit: type=1131 audit(1747442559.838:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.845129 systemd[1]: Reached target initrd.target.
May 17 00:42:39.845386 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:42:39.846073 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:42:39.852870 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:42:39.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.853575 systemd[1]: Starting initrd-cleanup.service...
May 17 00:42:39.859220 systemd[1]: Stopped target nss-lookup.target.
May 17 00:42:39.859505 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:42:39.859793 systemd[1]: Stopped target timers.target.
May 17 00:42:39.860045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:42:39.860247 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:42:39.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.860602 systemd[1]: Stopped target initrd.target.
May 17 00:42:39.860907 systemd[1]: Stopped target basic.target.
May 17 00:42:39.861164 systemd[1]: Stopped target ignition-complete.target.
May 17 00:42:39.861427 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:42:39.861695 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:42:39.861980 systemd[1]: Stopped target remote-fs.target.
May 17 00:42:39.862234 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:42:39.862493 systemd[1]: Stopped target sysinit.target.
May 17 00:42:39.862776 systemd[1]: Stopped target local-fs.target.
May 17 00:42:39.863029 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:42:39.863288 systemd[1]: Stopped target swap.target.
May 17 00:42:39.863515 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:42:39.863741 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:42:39.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.864086 systemd[1]: Stopped target cryptsetup.target.
May 17 00:42:39.864340 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:42:39.864554 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:42:39.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.864980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:42:39.865187 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:42:39.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.865530 systemd[1]: Stopped target paths.target.
May 17 00:42:39.865768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:42:39.867774 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:42:39.868062 systemd[1]: Stopped target slices.target.
May 17 00:42:39.868316 systemd[1]: Stopped target sockets.target.
May 17 00:42:39.868573 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:42:39.868947 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:42:39.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.869278 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:42:39.869473 systemd[1]: Stopped ignition-files.service.
May 17 00:42:39.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.870321 systemd[1]: Stopping ignition-mount.service...
May 17 00:42:39.872277 iscsid[741]: iscsid shutting down.
May 17 00:42:39.872575 systemd[1]: Stopping iscsid.service...
May 17 00:42:39.873347 systemd[1]: Stopping sysroot-boot.service...
May 17 00:42:39.873597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:42:39.873887 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:42:39.874223 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:42:39.874443 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:42:39.874634 ignition[908]: INFO : Ignition 2.14.0
May 17 00:42:39.874634 ignition[908]: INFO : Stage: umount
May 17 00:42:39.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.874995 ignition[908]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:42:39.874995 ignition[908]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 17 00:42:39.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.876483 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 17 00:42:39.876747 systemd[1]: iscsid.service: Deactivated successfully.
May 17 00:42:39.876970 systemd[1]: Stopped iscsid.service.
May 17 00:42:39.877683 ignition[908]: INFO : umount: umount passed
May 17 00:42:39.877683 ignition[908]: INFO : Ignition finished successfully
May 17 00:42:39.878096 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:42:39.878145 systemd[1]: Stopped ignition-mount.service.
May 17 00:42:39.878427 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:42:39.878472 systemd[1]: Closed iscsid.socket.
May 17 00:42:39.878606 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:42:39.878660 systemd[1]: Stopped ignition-disks.service.
May 17 00:42:39.878897 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:42:39.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.878951 systemd[1]: Stopped ignition-kargs.service.
May 17 00:42:39.879646 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:42:39.879669 systemd[1]: Stopped ignition-setup.service.
May 17 00:42:39.879803 systemd[1]: Stopping iscsiuio.service...
May 17 00:42:39.880004 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:42:39.880047 systemd[1]: Finished initrd-cleanup.service.
May 17 00:42:39.882992 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:42:39.883169 systemd[1]: Stopped iscsiuio.service.
May 17 00:42:39.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.883450 systemd[1]: Stopped target network.target.
May 17 00:42:39.883662 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:42:39.883680 systemd[1]: Closed iscsiuio.socket.
May 17 00:42:39.883905 systemd[1]: Stopping systemd-networkd.service...
May 17 00:42:39.884380 systemd[1]: Stopping systemd-resolved.service...
May 17 00:42:39.888240 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:42:39.888423 systemd[1]: Stopped systemd-networkd.service.
May 17 00:42:39.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.888963 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:42:39.888983 systemd[1]: Closed systemd-networkd.socket.
May 17 00:42:39.889665 systemd[1]: Stopping network-cleanup.service...
May 17 00:42:39.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.890532 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:42:39.890560 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:42:39.891119 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
May 17 00:42:39.891144 systemd[1]: Stopped afterburn-network-kargs.service.
May 17 00:42:39.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.891531 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:42:39.890000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:42:39.891557 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:42:39.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.892192 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:42:39.892216 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:42:39.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.892637 systemd[1]: Stopping systemd-udevd.service...
May 17 00:42:39.894759 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:42:39.895057 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:42:39.895959 systemd[1]: Stopped systemd-resolved.service.
May 17 00:42:39.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.896471 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:42:39.896593 systemd[1]: Stopped systemd-udevd.service.
May 17 00:42:39.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.896000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:42:39.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.897548 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:42:39.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.897573 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:42:39.897693 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:42:39.897796 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:42:39.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.897924 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:42:39.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.897948 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:42:39.898060 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:42:39.898080 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:42:39.898180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:42:39.898205 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:42:39.898793 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:42:39.898907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:42:39.898939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 00:42:39.899455 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:42:39.899481 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:42:39.899619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:42:39.899640 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:42:39.901251 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:42:39.901549 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:42:39.901611 systemd[1]: Stopped network-cleanup.service.
May 17 00:42:39.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.904129 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:42:39.905604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:42:39.905825 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:42:39.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:39.910383 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:42:39.910629 systemd[1]: Stopped sysroot-boot.service.
May 17 00:42:39.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:39.911032 systemd[1]: Reached target initrd-switch-root.target. May 17 00:42:39.911311 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:42:39.911497 systemd[1]: Stopped initrd-setup-root.service. May 17 00:42:39.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:39.912382 systemd[1]: Starting initrd-switch-root.service... May 17 00:42:39.920143 systemd[1]: Switching root. May 17 00:42:39.937517 systemd-journald[217]: Journal stopped May 17 00:42:44.369011 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). May 17 00:42:44.369032 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:42:44.369040 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:42:44.369046 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:42:44.369052 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:42:44.369058 kernel: SELinux: policy capability open_perms=1 May 17 00:42:44.369066 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:42:44.369072 kernel: SELinux: policy capability always_check_network=0 May 17 00:42:44.369077 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:42:44.369083 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:42:44.369088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:42:44.369094 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:42:44.369101 systemd[1]: Successfully loaded SELinux policy in 90.516ms. 
May 17 00:42:44.369109 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.131ms. May 17 00:42:44.369117 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:42:44.369123 systemd[1]: Detected virtualization vmware. May 17 00:42:44.369131 systemd[1]: Detected architecture x86-64. May 17 00:42:44.369137 systemd[1]: Detected first boot. May 17 00:42:44.369144 systemd[1]: Initializing machine ID from random generator. May 17 00:42:44.369150 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:42:44.369156 systemd[1]: Populated /etc with preset unit settings. May 17 00:42:44.369163 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:44.369170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:44.369177 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:42:44.369185 kernel: kauditd_printk_skb: 53 callbacks suppressed May 17 00:42:44.369191 kernel: audit: type=1334 audit(1747442564.229:87): prog-id=12 op=LOAD May 17 00:42:44.369197 kernel: audit: type=1334 audit(1747442564.229:88): prog-id=3 op=UNLOAD May 17 00:42:44.369203 kernel: audit: type=1334 audit(1747442564.230:89): prog-id=13 op=LOAD May 17 00:42:44.369209 kernel: audit: type=1334 audit(1747442564.231:90): prog-id=14 op=LOAD May 17 00:42:44.369215 kernel: audit: type=1334 audit(1747442564.231:91): prog-id=4 op=UNLOAD May 17 00:42:44.369221 kernel: audit: type=1334 audit(1747442564.231:92): prog-id=5 op=UNLOAD May 17 00:42:44.369228 kernel: audit: type=1334 audit(1747442564.233:93): prog-id=15 op=LOAD May 17 00:42:44.369234 kernel: audit: type=1334 audit(1747442564.233:94): prog-id=12 op=UNLOAD May 17 00:42:44.369240 kernel: audit: type=1334 audit(1747442564.235:95): prog-id=16 op=LOAD May 17 00:42:44.369246 kernel: audit: type=1334 audit(1747442564.237:96): prog-id=17 op=LOAD May 17 00:42:44.369252 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:42:44.369258 systemd[1]: Stopped initrd-switch-root.service. May 17 00:42:44.369264 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:42:44.369272 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:42:44.369279 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:42:44.369288 systemd[1]: Created slice system-getty.slice. May 17 00:42:44.369294 systemd[1]: Created slice system-modprobe.slice. May 17 00:42:44.369301 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:42:44.369308 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:42:44.369314 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:42:44.369321 systemd[1]: Created slice user.slice. May 17 00:42:44.369327 systemd[1]: Started systemd-ask-password-console.path. 
May 17 00:42:44.369335 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:42:44.369342 systemd[1]: Set up automount boot.automount. May 17 00:42:44.369349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:42:44.369355 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:42:44.369362 systemd[1]: Stopped target initrd-fs.target. May 17 00:42:44.369369 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:42:44.369376 systemd[1]: Reached target integritysetup.target. May 17 00:42:44.369382 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:42:44.369390 systemd[1]: Reached target remote-fs.target. May 17 00:42:44.369397 systemd[1]: Reached target slices.target. May 17 00:42:44.369403 systemd[1]: Reached target swap.target. May 17 00:42:44.369410 systemd[1]: Reached target torcx.target. May 17 00:42:44.369417 systemd[1]: Reached target veritysetup.target. May 17 00:42:44.369423 systemd[1]: Listening on systemd-coredump.socket. May 17 00:42:44.369430 systemd[1]: Listening on systemd-initctl.socket. May 17 00:42:44.369436 systemd[1]: Listening on systemd-networkd.socket. May 17 00:42:44.369443 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:42:44.369451 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:42:44.369458 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:42:44.369465 systemd[1]: Mounting dev-hugepages.mount... May 17 00:42:44.369472 systemd[1]: Mounting dev-mqueue.mount... May 17 00:42:44.369479 systemd[1]: Mounting media.mount... May 17 00:42:44.369487 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:44.369494 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:42:44.369501 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:42:44.369508 systemd[1]: Mounting tmp.mount... May 17 00:42:44.369514 systemd[1]: Starting flatcar-tmpfiles.service... 
May 17 00:42:44.369523 systemd[1]: Starting ignition-delete-config.service... May 17 00:42:44.369534 systemd[1]: Starting kmod-static-nodes.service... May 17 00:42:44.369546 systemd[1]: Starting modprobe@configfs.service... May 17 00:42:44.369556 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:42:44.369564 systemd[1]: Starting modprobe@drm.service... May 17 00:42:44.369571 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:42:44.369578 systemd[1]: Starting modprobe@fuse.service... May 17 00:42:44.369585 systemd[1]: Starting modprobe@loop.service... May 17 00:42:44.369592 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:42:44.369599 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:42:44.369605 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:42:44.369612 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:42:44.369620 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:42:44.369627 systemd[1]: Stopped systemd-journald.service. May 17 00:42:44.369634 systemd[1]: Starting systemd-journald.service... May 17 00:42:44.369641 systemd[1]: Starting systemd-modules-load.service... May 17 00:42:44.369648 systemd[1]: Starting systemd-network-generator.service... May 17 00:42:44.369655 systemd[1]: Starting systemd-remount-fs.service... May 17 00:42:44.369662 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:42:44.369669 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:42:44.369676 systemd[1]: Stopped verity-setup.service. May 17 00:42:44.369684 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:44.369691 systemd[1]: Mounted dev-hugepages.mount. May 17 00:42:44.369698 systemd[1]: Mounted dev-mqueue.mount. May 17 00:42:44.369719 systemd[1]: Mounted media.mount. 
May 17 00:42:44.369729 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:42:44.369737 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:42:44.369743 systemd[1]: Mounted tmp.mount. May 17 00:42:44.369750 systemd[1]: Finished kmod-static-nodes.service. May 17 00:42:44.369757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:42:44.369765 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:42:44.369772 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:42:44.369779 systemd[1]: Finished modprobe@drm.service. May 17 00:42:44.369786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:42:44.369793 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:42:44.369803 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:42:44.369814 systemd[1]: Finished modprobe@configfs.service. May 17 00:42:44.369825 systemd[1]: Finished systemd-network-generator.service. May 17 00:42:44.369837 systemd[1]: Finished systemd-remount-fs.service. May 17 00:42:44.369847 systemd[1]: Reached target network-pre.target. May 17 00:42:44.369854 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:42:44.369861 kernel: fuse: init (API version 7.34) May 17 00:42:44.369868 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:42:44.369875 kernel: loop: module loaded May 17 00:42:44.369887 systemd-journald[1021]: Journal started May 17 00:42:44.369918 systemd-journald[1021]: Runtime Journal (/run/log/journal/7733560014704317aa4722f110ff3dd8) is 4.8M, max 38.8M, 34.0M free. 
May 17 00:42:40.218000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:42:40.369000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:42:40.369000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:42:40.369000 audit: BPF prog-id=10 op=LOAD May 17 00:42:40.369000 audit: BPF prog-id=10 op=UNLOAD May 17 00:42:40.369000 audit: BPF prog-id=11 op=LOAD May 17 00:42:40.369000 audit: BPF prog-id=11 op=UNLOAD May 17 00:42:40.654000 audit[941]: AVC avc: denied { associate } for pid=941 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:42:40.654000 audit[941]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001aa0c2 a1=c000186018 a2=c000188040 a3=32 items=0 ppid=924 pid=941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:40.654000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:42:40.656000 audit[941]: AVC avc: denied { associate } for pid=941 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:42:40.656000 audit[941]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001aa199 a2=1ed a3=0 items=2 ppid=924 pid=941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:40.656000 audit: CWD cwd="/" May 17 00:42:40.656000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.656000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:40.656000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:42:44.229000 audit: BPF prog-id=12 op=LOAD May 17 00:42:44.229000 audit: BPF prog-id=3 op=UNLOAD May 17 00:42:44.230000 audit: BPF prog-id=13 op=LOAD May 17 00:42:44.231000 audit: BPF prog-id=14 op=LOAD May 17 00:42:44.231000 audit: BPF prog-id=4 op=UNLOAD May 17 00:42:44.231000 audit: BPF prog-id=5 op=UNLOAD May 17 00:42:44.233000 audit: BPF prog-id=15 op=LOAD May 17 00:42:44.233000 audit: BPF prog-id=12 op=UNLOAD May 17 00:42:44.235000 audit: BPF prog-id=16 op=LOAD May 17 00:42:44.237000 audit: BPF prog-id=17 op=LOAD May 17 00:42:44.237000 audit: BPF prog-id=13 op=UNLOAD May 17 00:42:44.237000 audit: BPF prog-id=14 op=UNLOAD May 17 00:42:44.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:44.241000 audit: BPF prog-id=15 op=UNLOAD May 17 00:42:44.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.309000 audit: BPF prog-id=18 op=LOAD May 17 00:42:44.309000 audit: BPF prog-id=19 op=LOAD May 17 00:42:44.309000 audit: BPF prog-id=20 op=LOAD May 17 00:42:44.309000 audit: BPF prog-id=16 op=UNLOAD May 17 00:42:44.309000 audit: BPF prog-id=17 op=UNLOAD May 17 00:42:44.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:44.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:44.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.365000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:42:44.365000 audit[1021]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffdcc0bd40 a2=4000 a3=7fffdcc0bddc items=0 ppid=1 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:44.365000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:42:40.648478 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:44.228510 systemd[1]: Queued start job for default target multi-user.target. 
May 17 00:42:40.650994 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:42:44.228519 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:42:40.651008 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:42:44.239425 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:42:40.651051 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:42:40.651058 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:42:40.651084 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:42:40.651092 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:42:40.651257 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:42:40.651294 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:42:44.371592 jq[1008]: true May 17 00:42:40.651310 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 
00:42:40.654526 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:42:40.654558 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:42:40.654575 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:42:40.654585 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:42:40.654596 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:42:40.654604 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:42:42.856591 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:42:42.856778 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc 
/bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:42:42.856846 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:42:42.856959 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:42:42.856992 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:42:44.372251 jq[1034]: true May 17 00:42:42.857033 /usr/lib/systemd/system-generators/torcx-generator[941]: time="2025-05-17T00:42:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:42:44.390955 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:42:44.390986 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:42:44.392385 systemd[1]: Starting systemd-random-seed.service... May 17 00:42:44.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:44.394415 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:42:44.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.394717 systemd[1]: Started systemd-journald.service. May 17 00:42:44.394627 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:42:44.394714 systemd[1]: Finished modprobe@fuse.service. May 17 00:42:44.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.394919 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:42:44.394988 systemd[1]: Finished modprobe@loop.service. May 17 00:42:44.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.395507 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:42:44.396968 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:42:44.397718 systemd[1]: Starting systemd-journal-flush.service... May 17 00:42:44.398343 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 17 00:42:44.399620 systemd[1]: Starting systemd-sysusers.service... May 17 00:42:44.400358 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:42:44.416611 systemd[1]: Finished systemd-modules-load.service. May 17 00:42:44.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.417525 systemd[1]: Starting systemd-sysctl.service... May 17 00:42:44.425632 systemd-journald[1021]: Time spent on flushing to /var/log/journal/7733560014704317aa4722f110ff3dd8 is 40.370ms for 2028 entries. May 17 00:42:44.425632 systemd-journald[1021]: System Journal (/var/log/journal/7733560014704317aa4722f110ff3dd8) is 8.0M, max 584.8M, 576.8M free. May 17 00:42:44.513464 systemd-journald[1021]: Received client request to flush runtime journal. May 17 00:42:44.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.430259 systemd[1]: Finished systemd-random-seed.service. May 17 00:42:44.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.430432 systemd[1]: Reached target first-boot-complete.target. May 17 00:42:44.458263 systemd[1]: Finished systemd-sysctl.service. May 17 00:42:44.514221 systemd[1]: Finished systemd-journal-flush.service. 
May 17 00:42:44.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.518955 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:42:44.519883 systemd[1]: Starting systemd-udev-settle.service... May 17 00:42:44.529128 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:42:44.576623 systemd[1]: Finished systemd-sysusers.service. May 17 00:42:44.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.577606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:42:44.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:44.701528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:42:44.822137 ignition[1058]: Ignition 2.14.0 May 17 00:42:44.822384 ignition[1058]: deleting config from guestinfo properties May 17 00:42:44.831539 ignition[1058]: Successfully deleted config May 17 00:42:44.832213 systemd[1]: Finished ignition-delete-config.service. May 17 00:42:44.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:45.141387 systemd[1]: Finished systemd-hwdb-update.service. 
May 17 00:42:45.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:45.140000 audit: BPF prog-id=21 op=LOAD May 17 00:42:45.140000 audit: BPF prog-id=22 op=LOAD May 17 00:42:45.140000 audit: BPF prog-id=7 op=UNLOAD May 17 00:42:45.140000 audit: BPF prog-id=8 op=UNLOAD May 17 00:42:45.142691 systemd[1]: Starting systemd-udevd.service... May 17 00:42:45.154810 systemd-udevd[1074]: Using default interface naming scheme 'v252'. May 17 00:42:45.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:45.333000 audit: BPF prog-id=23 op=LOAD May 17 00:42:45.333663 systemd[1]: Started systemd-udevd.service. May 17 00:42:45.335241 systemd[1]: Starting systemd-networkd.service... May 17 00:42:45.356183 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:42:45.399737 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:42:45.401000 audit: BPF prog-id=24 op=LOAD May 17 00:42:45.402000 audit: BPF prog-id=25 op=LOAD May 17 00:42:45.402000 audit: BPF prog-id=26 op=LOAD May 17 00:42:45.404201 systemd[1]: Starting systemd-userdbd.service... May 17 00:42:45.415714 kernel: ACPI: button: Power Button [PWRF] May 17 00:42:45.436825 systemd[1]: Started systemd-userdbd.service. May 17 00:42:45.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:45.442000 audit[1080]: AVC avc: denied { confidentiality } for pid=1080 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:42:45.442000 audit[1080]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56415aa44710 a1=338ac a2=7fda6ebf4bc5 a3=5 items=110 ppid=1074 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:45.442000 audit: CWD cwd="/" May 17 00:42:45.442000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=1 name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=2 name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=3 name=(null) inode=24285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=4 name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=5 name=(null) inode=24286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=6 
name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=7 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=8 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=9 name=(null) inode=24288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=10 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=11 name=(null) inode=24289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=12 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=13 name=(null) inode=24290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=14 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=15 name=(null) inode=24291 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=16 name=(null) inode=24287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=17 name=(null) inode=24292 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=18 name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=19 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=20 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=21 name=(null) inode=24294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=22 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=23 name=(null) inode=24295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=24 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=25 name=(null) inode=24296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=26 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=27 name=(null) inode=24297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=28 name=(null) inode=24293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=29 name=(null) inode=24298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=30 name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=31 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=32 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=33 name=(null) inode=24300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=34 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=35 name=(null) inode=24301 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=36 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=37 name=(null) inode=24302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=38 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=39 name=(null) inode=24303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=40 name=(null) inode=24299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=41 name=(null) inode=24304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=42 name=(null) inode=24284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=43 name=(null) inode=24305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=44 name=(null) inode=24305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=45 name=(null) inode=24306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=46 name=(null) inode=24305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=47 name=(null) inode=24307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=48 name=(null) inode=24305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=49 name=(null) inode=24308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=50 name=(null) inode=24305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=51 name=(null) inode=24309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:42:45.442000 audit: PATH item=52 name=(null) inode=24305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=53 name=(null) inode=24310 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=55 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=56 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=57 name=(null) inode=24312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=58 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=59 name=(null) inode=24313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.450284 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 May 17 00:42:45.452900 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 17 00:42:45.452982 kernel: Guest personality initialized and is active May 17 00:42:45.442000 
audit: PATH item=60 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=61 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=62 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=63 name=(null) inode=24315 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=64 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=65 name=(null) inode=24316 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=66 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=67 name=(null) inode=24317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=68 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=69 name=(null) inode=24318 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=70 name=(null) inode=24314 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=71 name=(null) inode=24319 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=72 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=73 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=74 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=75 name=(null) inode=24321 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=76 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=77 name=(null) inode=24322 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=78 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=79 name=(null) inode=24323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=80 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=81 name=(null) inode=24324 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=82 name=(null) inode=24320 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=83 name=(null) inode=24325 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=84 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=85 name=(null) inode=24326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=86 name=(null) inode=24326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=87 name=(null) inode=24327 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=88 name=(null) inode=24326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=89 name=(null) inode=24328 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=90 name=(null) inode=24326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=91 name=(null) inode=24329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=92 name=(null) inode=24326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=93 name=(null) inode=24330 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=94 name=(null) inode=24326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=95 name=(null) inode=24331 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=96 name=(null) inode=24311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=97 name=(null) inode=24332 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=98 name=(null) inode=24332 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=99 name=(null) inode=24333 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=100 name=(null) inode=24332 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=101 name=(null) inode=24334 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=102 name=(null) inode=24332 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=103 name=(null) inode=24335 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=104 name=(null) inode=24332 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=105 name=(null) inode=24336 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=106 name=(null) inode=24332 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=107 name=(null) inode=24337 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PATH item=109 name=(null) inode=24338 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:45.442000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:42:45.455173 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 17 00:42:45.455206 kernel: Initialized host personality May 17 00:42:45.462721 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 17 00:42:45.482550 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:42:45.504713 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:42:45.516971 (udev-worker)[1077]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 17 00:42:45.518119 systemd-networkd[1082]: lo: Link UP May 17 00:42:45.518124 systemd-networkd[1082]: lo: Gained carrier May 17 00:42:45.518446 systemd-networkd[1082]: Enumeration completed May 17 00:42:45.518504 systemd[1]: Started systemd-networkd.service. May 17 00:42:45.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:45.518819 systemd-networkd[1082]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 17 00:42:45.522445 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 17 00:42:45.523634 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 17 00:42:45.523738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready May 17 00:42:45.523680 systemd-networkd[1082]: ens192: Link UP May 17 00:42:45.523865 systemd-networkd[1082]: ens192: Gained carrier May 17 00:42:45.546051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:42:45.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:45.567926 systemd[1]: Finished systemd-udev-settle.service. May 17 00:42:45.568911 systemd[1]: Starting lvm2-activation-early.service... May 17 00:42:45.647122 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:42:45.675291 systemd[1]: Finished lvm2-activation-early.service. May 17 00:42:45.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:45.675479 systemd[1]: Reached target cryptsetup.target. May 17 00:42:45.676368 systemd[1]: Starting lvm2-activation.service... May 17 00:42:45.679045 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:42:45.698237 systemd[1]: Finished lvm2-activation.service. May 17 00:42:45.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:45.698414 systemd[1]: Reached target local-fs-pre.target. May 17 00:42:45.698519 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:42:45.698538 systemd[1]: Reached target local-fs.target. May 17 00:42:45.698627 systemd[1]: Reached target machines.target. May 17 00:42:45.699578 systemd[1]: Starting ldconfig.service... May 17 00:42:45.709345 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:42:45.709380 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:45.710393 systemd[1]: Starting systemd-boot-update.service... May 17 00:42:45.711334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:42:45.712378 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:42:45.713491 systemd[1]: Starting systemd-sysext.service... May 17 00:42:45.737737 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) May 17 00:42:45.739250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:42:45.744801 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:42:45.746803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:42:45.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:45.765884 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:42:45.766021 systemd[1]: Unmounted usr-share-oem.mount. 
May 17 00:42:45.792785 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:42:46.262423 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:42:46.263208 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:42:46.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.325716 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:42:46.355725 kernel: loop1: detected capacity change from 0 to 224512 May 17 00:42:46.367011 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31) May 17 00:42:46.367011 systemd-fsck[1119]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:42:46.368055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:42:46.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.369493 systemd[1]: Mounting boot.mount... May 17 00:42:46.435620 systemd[1]: Mounted boot.mount. May 17 00:42:46.452232 (sd-sysext)[1122]: Using extensions 'kubernetes'. May 17 00:42:46.453366 (sd-sysext)[1122]: Merged extensions into '/usr'. May 17 00:42:46.457840 systemd[1]: Finished systemd-boot-update.service. May 17 00:42:46.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.466286 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.467670 systemd[1]: Mounting usr-share-oem.mount... 
May 17 00:42:46.468586 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:42:46.470552 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:42:46.471939 systemd[1]: Starting modprobe@loop.service... May 17 00:42:46.472080 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:42:46.472164 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:46.472241 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.474610 systemd[1]: Mounted usr-share-oem.mount. May 17 00:42:46.475025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:42:46.475130 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:42:46.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.475502 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:42:46.475598 systemd[1]: Finished modprobe@loop.service. May 17 00:42:46.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:46.475984 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:42:46.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.476204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:42:46.476301 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:42:46.476559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:42:46.477473 systemd[1]: Finished systemd-sysext.service. May 17 00:42:46.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.478474 systemd[1]: Starting ensure-sysext.service... May 17 00:42:46.479465 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:42:46.485460 systemd[1]: Reloading. May 17 00:42:46.497132 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:42:46.504710 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:42:46.509888 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:42:46.535566 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2025-05-17T00:42:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:46.535809 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2025-05-17T00:42:46Z" level=info msg="torcx already run" May 17 00:42:46.617226 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:46.617244 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:46.633869 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:42:46.671000 audit: BPF prog-id=27 op=LOAD May 17 00:42:46.671000 audit: BPF prog-id=28 op=LOAD May 17 00:42:46.671000 audit: BPF prog-id=21 op=UNLOAD May 17 00:42:46.671000 audit: BPF prog-id=22 op=UNLOAD May 17 00:42:46.671000 audit: BPF prog-id=29 op=LOAD May 17 00:42:46.671000 audit: BPF prog-id=18 op=UNLOAD May 17 00:42:46.671000 audit: BPF prog-id=30 op=LOAD May 17 00:42:46.672000 audit: BPF prog-id=31 op=LOAD May 17 00:42:46.672000 audit: BPF prog-id=19 op=UNLOAD May 17 00:42:46.672000 audit: BPF prog-id=20 op=UNLOAD May 17 00:42:46.673000 audit: BPF prog-id=32 op=LOAD May 17 00:42:46.673000 audit: BPF prog-id=23 op=UNLOAD May 17 00:42:46.674000 audit: BPF prog-id=33 op=LOAD May 17 00:42:46.674000 audit: BPF prog-id=24 op=UNLOAD May 17 00:42:46.674000 audit: BPF prog-id=34 op=LOAD May 17 00:42:46.674000 audit: BPF prog-id=35 op=LOAD May 17 00:42:46.674000 audit: BPF prog-id=25 op=UNLOAD May 17 00:42:46.674000 audit: BPF prog-id=26 op=UNLOAD May 17 00:42:46.682302 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.683119 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:42:46.684344 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:42:46.685163 systemd[1]: Starting modprobe@loop.service... May 17 00:42:46.685387 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:42:46.685463 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:46.685528 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.685998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:42:46.686077 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:42:46.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.686658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:42:46.686738 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:42:46.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.687199 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:42:46.687318 systemd[1]: Finished modprobe@loop.service. May 17 00:42:46.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.688428 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.689171 systemd[1]: Starting modprobe@dm_mod.service... 
May 17 00:42:46.690104 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:42:46.691243 systemd[1]: Starting modprobe@loop.service... May 17 00:42:46.691412 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:42:46.691527 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:46.691681 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.692209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:42:46.692285 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:42:46.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.692804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:42:46.692928 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:42:46.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.693381 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:42:46.693511 systemd[1]: Finished modprobe@loop.service. May 17 00:42:46.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.695309 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.696251 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:42:46.697251 systemd[1]: Starting modprobe@drm.service... May 17 00:42:46.698099 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:42:46.699324 systemd[1]: Starting modprobe@loop.service... May 17 00:42:46.699580 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:42:46.699651 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:46.700538 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:42:46.700800 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:42:46.701551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:42:46.701685 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:42:46.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:46.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.702257 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:42:46.702381 systemd[1]: Finished modprobe@drm.service. May 17 00:42:46.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.702835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:42:46.702958 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:42:46.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.703406 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:42:46.703525 systemd[1]: Finished modprobe@loop.service. May 17 00:42:46.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:46.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.704199 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:42:46.704291 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:42:46.704973 systemd[1]: Finished ensure-sysext.service. May 17 00:42:46.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.951608 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:42:46.952820 systemd[1]: Starting audit-rules.service... May 17 00:42:46.953981 systemd[1]: Starting clean-ca-certificates.service... May 17 00:42:46.954000 audit: BPF prog-id=36 op=LOAD May 17 00:42:46.955000 audit: BPF prog-id=37 op=LOAD May 17 00:42:46.955030 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:42:46.956361 systemd[1]: Starting systemd-resolved.service... May 17 00:42:46.957639 systemd[1]: Starting systemd-timesyncd.service... May 17 00:42:46.959629 systemd[1]: Starting systemd-update-utmp.service... May 17 00:42:46.967977 systemd[1]: Finished clean-ca-certificates.service. May 17 00:42:46.969248 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 17 00:42:46.966000 audit[1229]: SYSTEM_BOOT pid=1229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:42:46.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.970151 systemd[1]: Finished systemd-update-utmp.service. May 17 00:42:46.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:46.990624 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:42:46.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:47.002764 systemd-networkd[1082]: ens192: Gained IPv6LL May 17 00:42:47.009280 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:42:47.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:47.022000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:42:47.022000 audit[1243]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde16f37b0 a2=420 a3=0 items=0 ppid=1223 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.022000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:42:47.024881 augenrules[1243]: No rules May 17 00:42:47.025333 systemd[1]: Finished audit-rules.service. May 17 00:42:47.027307 systemd[1]: Started systemd-timesyncd.service. May 17 00:42:47.027463 systemd[1]: Reached target time-set.target. May 17 00:42:47.048529 systemd-resolved[1226]: Positive Trust Anchors: May 17 00:42:47.048749 systemd-resolved[1226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:42:47.048812 systemd-resolved[1226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:44:08.479683 systemd-timesyncd[1227]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). May 17 00:44:08.479727 systemd-timesyncd[1227]: Initial clock synchronization to Sat 2025-05-17 00:44:08.479584 UTC. May 17 00:44:08.533083 systemd-resolved[1226]: Defaulting to hostname 'linux'. May 17 00:44:08.534354 systemd[1]: Started systemd-resolved.service. 
May 17 00:44:08.534549 systemd[1]: Reached target network.target. May 17 00:44:08.534676 systemd[1]: Reached target network-online.target. May 17 00:44:08.534798 systemd[1]: Reached target nss-lookup.target. May 17 00:44:09.031374 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:44:09.062231 systemd[1]: Finished ldconfig.service. May 17 00:44:09.063293 systemd[1]: Starting systemd-update-done.service... May 17 00:44:09.071036 systemd[1]: Finished systemd-update-done.service. May 17 00:44:09.071205 systemd[1]: Reached target sysinit.target. May 17 00:44:09.071371 systemd[1]: Started motdgen.path. May 17 00:44:09.071467 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:44:09.071648 systemd[1]: Started logrotate.timer. May 17 00:44:09.071764 systemd[1]: Started mdadm.timer. May 17 00:44:09.071842 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:44:09.071931 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:44:09.071946 systemd[1]: Reached target paths.target. May 17 00:44:09.072027 systemd[1]: Reached target timers.target. May 17 00:44:09.072268 systemd[1]: Listening on dbus.socket. May 17 00:44:09.073042 systemd[1]: Starting docker.socket... May 17 00:44:09.084552 systemd[1]: Listening on sshd.socket. May 17 00:44:09.084763 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:44:09.085010 systemd[1]: Listening on docker.socket. May 17 00:44:09.085240 systemd[1]: Reached target sockets.target. May 17 00:44:09.085408 systemd[1]: Reached target basic.target. May 17 00:44:09.085570 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
May 17 00:44:09.085585 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:44:09.086424 systemd[1]: Starting containerd.service... May 17 00:44:09.087239 systemd[1]: Starting dbus.service... May 17 00:44:09.088329 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:44:09.089292 systemd[1]: Starting extend-filesystems.service... May 17 00:44:09.089949 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:44:09.090406 jq[1254]: false May 17 00:44:09.093509 systemd[1]: Starting kubelet.service... May 17 00:44:09.095573 systemd[1]: Starting motdgen.service... May 17 00:44:09.096487 systemd[1]: Starting prepare-helm.service... May 17 00:44:09.097827 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:44:09.098785 systemd[1]: Starting sshd-keygen.service... May 17 00:44:09.100390 systemd[1]: Starting systemd-logind.service... May 17 00:44:09.100533 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:44:09.100574 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:44:09.101025 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:44:09.101412 systemd[1]: Starting update-engine.service... May 17 00:44:09.104786 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:44:09.106208 systemd[1]: Starting vmtoolsd.service... May 17 00:44:09.109604 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:44:09.109730 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
May 17 00:44:09.115505 jq[1266]: true May 17 00:44:09.116970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:44:09.117099 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:44:09.122522 systemd[1]: Started vmtoolsd.service. May 17 00:44:09.125718 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:44:09.125904 systemd[1]: Finished motdgen.service. May 17 00:44:09.136922 jq[1274]: true May 17 00:44:09.147661 extend-filesystems[1255]: Found loop1 May 17 00:44:09.147661 extend-filesystems[1255]: Found sda May 17 00:44:09.147661 extend-filesystems[1255]: Found sda1 May 17 00:44:09.147661 extend-filesystems[1255]: Found sda2 May 17 00:44:09.147661 extend-filesystems[1255]: Found sda3 May 17 00:44:09.147661 extend-filesystems[1255]: Found usr May 17 00:44:09.147661 extend-filesystems[1255]: Found sda4 May 17 00:44:09.147661 extend-filesystems[1255]: Found sda6 May 17 00:44:09.147661 extend-filesystems[1255]: Found sda7 May 17 00:44:09.147661 extend-filesystems[1255]: Found sda9 May 17 00:44:09.147661 extend-filesystems[1255]: Checking size of /dev/sda9 May 17 00:44:09.152248 env[1275]: time="2025-05-17T00:44:09.152224975Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:44:09.158160 tar[1270]: linux-amd64/LICENSE May 17 00:44:09.158296 tar[1270]: linux-amd64/helm May 17 00:44:09.187340 extend-filesystems[1255]: Old size kept for /dev/sda9 May 17 00:44:09.187340 extend-filesystems[1255]: Found sr0 May 17 00:44:09.187011 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:44:09.187110 systemd[1]: Finished extend-filesystems.service. 
May 17 00:44:09.190186 systemd-logind[1263]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:44:09.190200 systemd-logind[1263]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:44:09.190661 env[1275]: time="2025-05-17T00:44:09.190536945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:44:09.190661 env[1275]: time="2025-05-17T00:44:09.190624742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:44:09.191291 systemd-logind[1263]: New seat seat0. May 17 00:44:09.194556 env[1275]: time="2025-05-17T00:44:09.194532991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:44:09.194556 env[1275]: time="2025-05-17T00:44:09.194554031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:44:09.194689 env[1275]: time="2025-05-17T00:44:09.194673061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:44:09.194689 env[1275]: time="2025-05-17T00:44:09.194686143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 17 00:44:09.194732 env[1275]: time="2025-05-17T00:44:09.194694508Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:44:09.194732 env[1275]: time="2025-05-17T00:44:09.194700042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:44:09.194761 env[1275]: time="2025-05-17T00:44:09.194741849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:44:09.194891 env[1275]: time="2025-05-17T00:44:09.194876677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:44:09.198069 env[1275]: time="2025-05-17T00:44:09.194947369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:44:09.198069 env[1275]: time="2025-05-17T00:44:09.198060636Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:44:09.198117 env[1275]: time="2025-05-17T00:44:09.198095367Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:44:09.198117 env[1275]: time="2025-05-17T00:44:09.198103620Z" level=info msg="metadata content store policy set" policy=shared May 17 00:44:09.244313 env[1275]: time="2025-05-17T00:44:09.244284841Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:44:09.244313 env[1275]: time="2025-05-17T00:44:09.244313986Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244322456Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244364110Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244375178Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244384971Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244392021Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244399297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244410 env[1275]: time="2025-05-17T00:44:09.244406189Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244517 env[1275]: time="2025-05-17T00:44:09.244442583Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244517 env[1275]: time="2025-05-17T00:44:09.244453479Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244517 env[1275]: time="2025-05-17T00:44:09.244471693Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:44:09.244565 env[1275]: time="2025-05-17T00:44:09.244551463Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 17 00:44:09.244639 env[1275]: time="2025-05-17T00:44:09.244627340Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:44:09.244852 env[1275]: time="2025-05-17T00:44:09.244839233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:44:09.244892 env[1275]: time="2025-05-17T00:44:09.244868296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:44:09.244921 env[1275]: time="2025-05-17T00:44:09.244893310Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:44:09.244941 env[1275]: time="2025-05-17T00:44:09.244927256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:44:09.244941 env[1275]: time="2025-05-17T00:44:09.244935832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:44:09.244974 env[1275]: time="2025-05-17T00:44:09.244942614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245004 env[1275]: time="2025-05-17T00:44:09.244992676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245031 env[1275]: time="2025-05-17T00:44:09.245004689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245031 env[1275]: time="2025-05-17T00:44:09.245012336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245031 env[1275]: time="2025-05-17T00:44:09.245018987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 17 00:44:09.245079 env[1275]: time="2025-05-17T00:44:09.245034590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245079 env[1275]: time="2025-05-17T00:44:09.245045595Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:44:09.245158 env[1275]: time="2025-05-17T00:44:09.245140698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245191 env[1275]: time="2025-05-17T00:44:09.245164095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245191 env[1275]: time="2025-05-17T00:44:09.245183742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:44:09.245231 env[1275]: time="2025-05-17T00:44:09.245191368Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:44:09.245231 env[1275]: time="2025-05-17T00:44:09.245201183Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:44:09.245231 env[1275]: time="2025-05-17T00:44:09.245207618Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:44:09.245231 env[1275]: time="2025-05-17T00:44:09.245217628Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:44:09.245299 env[1275]: time="2025-05-17T00:44:09.245249243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:44:09.245419 env[1275]: time="2025-05-17T00:44:09.245380748Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.245422420Z" level=info msg="Connect containerd service" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.245449400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.245873028Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.245965088Z" level=info msg="Start subscribing containerd event" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246001823Z" level=info msg="Start recovering state" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246041911Z" level=info msg="Start event monitor" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246049568Z" level=info msg="Start snapshots syncer" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246054746Z" level=info msg="Start cni network conf syncer for default" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246058464Z" level=info msg="Start streaming server" May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246089947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246116586Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 00:44:09.257620 env[1275]: time="2025-05-17T00:44:09.246345518Z" level=info msg="containerd successfully booted in 0.094812s" May 17 00:44:09.257830 bash[1305]: Updated "/home/core/.ssh/authorized_keys" May 17 00:44:09.246195 systemd[1]: Started containerd.service. May 17 00:44:09.256338 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:44:09.304168 kernel: NET: Registered PF_VSOCK protocol family May 17 00:44:09.320965 dbus-daemon[1253]: [system] SELinux support is enabled May 17 00:44:09.321076 systemd[1]: Started dbus.service. May 17 00:44:09.322318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:44:09.322335 systemd[1]: Reached target system-config.target. May 17 00:44:09.322476 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:44:09.322495 systemd[1]: Reached target user-config.target. May 17 00:44:09.322934 systemd[1]: Started systemd-logind.service. May 17 00:44:09.323018 dbus-daemon[1253]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:44:09.434773 update_engine[1264]: I0517 00:44:09.427733 1264 main.cc:92] Flatcar Update Engine starting May 17 00:44:09.448115 systemd[1]: Started update-engine.service. May 17 00:44:09.449641 systemd[1]: Started locksmithd.service. May 17 00:44:09.450751 update_engine[1264]: I0517 00:44:09.450726 1264 update_check_scheduler.cc:74] Next update check in 11m19s May 17 00:44:09.544464 tar[1270]: linux-amd64/README.md May 17 00:44:09.547847 systemd[1]: Finished prepare-helm.service. 
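The CRI plugin config dumped above shows overlayfs as the snapshotter, `runc` via `io.containerd.runc.v2` with `SystemdCgroup:true`, and `registry.k8s.io/pause:3.6` as the sandbox image. A sketch of the corresponding `/etc/containerd/config.toml` fragment that would produce those values (written to a scratch path here; the file on this host was not captured, so treat this as an illustrative reconstruction):

```shell
# Reconstruction of the CRI settings visible in the log's config dump.
# On a real host this content belongs in /etc/containerd/config.toml.
cat > ./containerd-config.toml <<'EOF'
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
EOF
grep -q 'SystemdCgroup = true' ./containerd-config.toml && echo "config written"
```

Note that the "failed to load cni during init" error above is expected at this point: `/etc/cni/net.d` is empty until a CNI plugin (installed later by the cluster tooling) drops a network config there.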
May 17 00:44:09.873787 locksmithd[1317]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:44:10.249746 sshd_keygen[1283]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:44:10.262431 systemd[1]: Finished sshd-keygen.service. May 17 00:44:10.263664 systemd[1]: Starting issuegen.service... May 17 00:44:10.266955 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:44:10.267056 systemd[1]: Finished issuegen.service. May 17 00:44:10.268137 systemd[1]: Starting systemd-user-sessions.service... May 17 00:44:10.277270 systemd[1]: Finished systemd-user-sessions.service. May 17 00:44:10.278383 systemd[1]: Started getty@tty1.service. May 17 00:44:10.279245 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:44:10.279447 systemd[1]: Reached target getty.target. May 17 00:44:11.755092 systemd[1]: Started kubelet.service. May 17 00:44:11.755489 systemd[1]: Reached target multi-user.target. May 17 00:44:11.756719 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:44:11.763895 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:44:11.764024 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:44:11.764329 systemd[1]: Startup finished in 926ms (kernel) + 7.521s (initrd) + 10.267s (userspace) = 18.715s. May 17 00:44:11.803234 login[1381]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:44:11.803337 login[1382]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:44:11.814888 systemd[1]: Created slice user-500.slice. May 17 00:44:11.815836 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:44:11.820600 systemd-logind[1263]: New session 1 of user core. May 17 00:44:11.822979 systemd-logind[1263]: New session 2 of user core. May 17 00:44:11.825494 systemd[1]: Finished user-runtime-dir@500.service. 
May 17 00:44:11.826546 systemd[1]: Starting user@500.service... May 17 00:44:11.829996 (systemd)[1388]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:11.907139 systemd[1388]: Queued start job for default target default.target. May 17 00:44:11.907905 systemd[1388]: Reached target paths.target. May 17 00:44:11.907920 systemd[1388]: Reached target sockets.target. May 17 00:44:11.907928 systemd[1388]: Reached target timers.target. May 17 00:44:11.907935 systemd[1388]: Reached target basic.target. May 17 00:44:11.907996 systemd[1]: Started user@500.service. May 17 00:44:11.908632 systemd[1388]: Reached target default.target. May 17 00:44:11.908657 systemd[1388]: Startup finished in 75ms. May 17 00:44:11.908794 systemd[1]: Started session-1.scope. May 17 00:44:11.909369 systemd[1]: Started session-2.scope. May 17 00:44:12.360880 kubelet[1385]: E0517 00:44:12.360842 1385 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:44:12.362041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:44:12.362124 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:44:22.612782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:44:22.612971 systemd[1]: Stopped kubelet.service. May 17 00:44:22.614298 systemd[1]: Starting kubelet.service... May 17 00:44:22.797230 systemd[1]: Started kubelet.service. 
May 17 00:44:22.843454 kubelet[1417]: E0517 00:44:22.843418 1417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:44:22.845236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:44:22.845321 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:44:32.901106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:44:32.901252 systemd[1]: Stopped kubelet.service. May 17 00:44:32.902366 systemd[1]: Starting kubelet.service... May 17 00:44:33.104571 systemd[1]: Started kubelet.service. May 17 00:44:33.171945 kubelet[1427]: E0517 00:44:33.171864 1427 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:44:33.173140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:44:33.173235 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:44:39.640390 systemd[1]: Created slice system-sshd.slice. May 17 00:44:39.641194 systemd[1]: Started sshd@0-139.178.70.99:22-147.75.109.163:58050.service. May 17 00:44:39.779867 sshd[1433]: Accepted publickey for core from 147.75.109.163 port 58050 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:44:39.780625 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:39.783972 systemd[1]: Started session-3.scope. May 17 00:44:39.784692 systemd-logind[1263]: New session 3 of user core. 
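The repeating kubelet failure above is a single root cause: `/var/lib/kubelet/config.yaml` does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so the crash loop is expected until the node is joined to a cluster. For reference, a minimal sketch of such a file (field values here are illustrative assumptions, not recovered from this host):

```shell
# Minimal KubeletConfiguration sketch; on a real node this is
# /var/lib/kubelet/config.yaml, usually generated by kubeadm.
cat > ./kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup=true in containerd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
grep -q 'kind: KubeletConfiguration' ./kubelet-config.yaml && echo ok
```

The `cgroupDriver: systemd` line is the one worth checking against the containerd config logged earlier; a mismatch between the kubelet and runtime cgroup drivers is a common source of later pod failures.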
May 17 00:44:39.832731 systemd[1]: Started sshd@1-139.178.70.99:22-147.75.109.163:58066.service. May 17 00:44:39.869311 sshd[1438]: Accepted publickey for core from 147.75.109.163 port 58066 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:44:39.870118 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:39.872919 systemd-logind[1263]: New session 4 of user core. May 17 00:44:39.873233 systemd[1]: Started session-4.scope. May 17 00:44:39.924265 sshd[1438]: pam_unix(sshd:session): session closed for user core May 17 00:44:39.926113 systemd[1]: Started sshd@2-139.178.70.99:22-147.75.109.163:58072.service. May 17 00:44:39.927982 systemd-logind[1263]: Session 4 logged out. Waiting for processes to exit. May 17 00:44:39.928106 systemd[1]: sshd@1-139.178.70.99:22-147.75.109.163:58066.service: Deactivated successfully. May 17 00:44:39.928496 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:44:39.928970 systemd-logind[1263]: Removed session 4. May 17 00:44:39.963379 sshd[1443]: Accepted publickey for core from 147.75.109.163 port 58072 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:44:39.964385 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:39.966876 systemd-logind[1263]: New session 5 of user core. May 17 00:44:39.967358 systemd[1]: Started session-5.scope. May 17 00:44:40.014823 sshd[1443]: pam_unix(sshd:session): session closed for user core May 17 00:44:40.016678 systemd[1]: Started sshd@3-139.178.70.99:22-147.75.109.163:58076.service. May 17 00:44:40.017214 systemd[1]: sshd@2-139.178.70.99:22-147.75.109.163:58072.service: Deactivated successfully. May 17 00:44:40.017666 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:44:40.019316 systemd-logind[1263]: Session 5 logged out. Waiting for processes to exit. May 17 00:44:40.020091 systemd-logind[1263]: Removed session 5. 
May 17 00:44:40.053824 sshd[1449]: Accepted publickey for core from 147.75.109.163 port 58076 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:44:40.054611 sshd[1449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:40.057601 systemd[1]: Started session-6.scope. May 17 00:44:40.057828 systemd-logind[1263]: New session 6 of user core. May 17 00:44:40.107900 sshd[1449]: pam_unix(sshd:session): session closed for user core May 17 00:44:40.109834 systemd[1]: sshd@3-139.178.70.99:22-147.75.109.163:58076.service: Deactivated successfully. May 17 00:44:40.110149 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:44:40.110633 systemd-logind[1263]: Session 6 logged out. Waiting for processes to exit. May 17 00:44:40.111233 systemd[1]: Started sshd@4-139.178.70.99:22-147.75.109.163:58088.service. May 17 00:44:40.111789 systemd-logind[1263]: Removed session 6. May 17 00:44:40.147783 sshd[1456]: Accepted publickey for core from 147.75.109.163 port 58088 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:44:40.148728 sshd[1456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:40.152118 systemd-logind[1263]: New session 7 of user core. May 17 00:44:40.152673 systemd[1]: Started session-7.scope. May 17 00:44:40.364917 sudo[1459]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:44:40.365780 sudo[1459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:44:40.388243 systemd[1]: Starting docker.service... 
May 17 00:44:40.420794 env[1469]: time="2025-05-17T00:44:40.420768085Z" level=info msg="Starting up" May 17 00:44:40.421704 env[1469]: time="2025-05-17T00:44:40.421689425Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:44:40.421751 env[1469]: time="2025-05-17T00:44:40.421704733Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:44:40.421751 env[1469]: time="2025-05-17T00:44:40.421720305Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:44:40.421751 env[1469]: time="2025-05-17T00:44:40.421726743Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:44:40.422906 env[1469]: time="2025-05-17T00:44:40.422889015Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:44:40.422960 env[1469]: time="2025-05-17T00:44:40.422951426Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:44:40.423007 env[1469]: time="2025-05-17T00:44:40.422996792Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:44:40.423047 env[1469]: time="2025-05-17T00:44:40.423038617Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:44:40.426478 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport933702508-merged.mount: Deactivated successfully. May 17 00:44:40.526001 env[1469]: time="2025-05-17T00:44:40.525966700Z" level=info msg="Loading containers: start." May 17 00:44:40.616170 kernel: Initializing XFRM netlink socket May 17 00:44:40.642078 env[1469]: time="2025-05-17T00:44:40.642053685Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 17 00:44:40.684586 systemd-networkd[1082]: docker0: Link UP May 17 00:44:40.691945 env[1469]: time="2025-05-17T00:44:40.691920337Z" level=info msg="Loading containers: done." May 17 00:44:40.698409 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2768164194-merged.mount: Deactivated successfully. May 17 00:44:40.701565 env[1469]: time="2025-05-17T00:44:40.701538234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:44:40.701673 env[1469]: time="2025-05-17T00:44:40.701658365Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:44:40.701731 env[1469]: time="2025-05-17T00:44:40.701719925Z" level=info msg="Daemon has completed initialization" May 17 00:44:40.713741 systemd[1]: Started docker.service. May 17 00:44:40.714878 env[1469]: time="2025-05-17T00:44:40.714848473Z" level=info msg="API listen on /run/docker.sock" May 17 00:44:41.841247 env[1275]: time="2025-05-17T00:44:41.841073269Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:44:42.605317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859653664.mount: Deactivated successfully. May 17 00:44:43.401055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:44:43.401212 systemd[1]: Stopped kubelet.service. May 17 00:44:43.402237 systemd[1]: Starting kubelet.service... May 17 00:44:43.464118 systemd[1]: Started kubelet.service. 
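As the daemon itself notes above, `--bip` overrides the default `docker0` subnet of 172.17.0.0/16. The same setting is more commonly placed in `/etc/docker/daemon.json`; a sketch (written to a scratch path, and the 10.200.0.1/24 value is purely an illustrative assumption):

```shell
# daemon.json fragment equivalent to dockerd --bip=10.200.0.1/24.
# On a real host this lives at /etc/docker/daemon.json.
cat > ./daemon.json <<'EOF'
{
  "bip": "10.200.0.1/24"
}
EOF
grep -q '"bip"' ./daemon.json && echo ok
```

Changing the bridge subnet this way is mainly useful when 172.17.0.0/16 collides with an existing network reachable from the host.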
May 17 00:44:43.541581 kubelet[1595]: E0517 00:44:43.541548 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:44:43.542399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:44:43.542473 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:44:43.777643 env[1275]: time="2025-05-17T00:44:43.777426820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:43.779018 env[1275]: time="2025-05-17T00:44:43.779005776Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:43.780087 env[1275]: time="2025-05-17T00:44:43.780074716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:43.781507 env[1275]: time="2025-05-17T00:44:43.781493657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:43.783581 env[1275]: time="2025-05-17T00:44:43.783553098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:44:43.783961 env[1275]: time="2025-05-17T00:44:43.783949114Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:44:45.105164 env[1275]: time="2025-05-17T00:44:45.105108284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:45.106097 env[1275]: time="2025-05-17T00:44:45.106075551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:45.108631 env[1275]: time="2025-05-17T00:44:45.108611317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:45.110449 env[1275]: time="2025-05-17T00:44:45.110418797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:45.111723 env[1275]: time="2025-05-17T00:44:45.111692075Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:44:45.113359 env[1275]: time="2025-05-17T00:44:45.113332183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:44:46.237025 env[1275]: time="2025-05-17T00:44:46.236992938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:46.237893 env[1275]: time="2025-05-17T00:44:46.237874451Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:46.238915 env[1275]: time="2025-05-17T00:44:46.238899607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:46.239956 env[1275]: time="2025-05-17T00:44:46.239942369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:46.240444 env[1275]: time="2025-05-17T00:44:46.240428404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:44:46.240853 env[1275]: time="2025-05-17T00:44:46.240841132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:44:47.410252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486514437.mount: Deactivated successfully. 
May 17 00:44:47.998657 env[1275]: time="2025-05-17T00:44:47.998628012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:48.025315 env[1275]: time="2025-05-17T00:44:48.025287506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:48.032349 env[1275]: time="2025-05-17T00:44:48.032326880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:48.034551 env[1275]: time="2025-05-17T00:44:48.034536985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:48.034730 env[1275]: time="2025-05-17T00:44:48.034711342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:44:48.035195 env[1275]: time="2025-05-17T00:44:48.035175972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:44:48.588268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843908654.mount: Deactivated successfully. 
May 17 00:44:49.395208 env[1275]: time="2025-05-17T00:44:49.395179063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.410303 env[1275]: time="2025-05-17T00:44:49.410273175Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.415784 env[1275]: time="2025-05-17T00:44:49.415764446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.421203 env[1275]: time="2025-05-17T00:44:49.421184926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.421759 env[1275]: time="2025-05-17T00:44:49.421744219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:44:49.422619 env[1275]: time="2025-05-17T00:44:49.422606759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:44:49.886466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224812875.mount: Deactivated successfully. 
May 17 00:44:49.889345 env[1275]: time="2025-05-17T00:44:49.889319368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.889889 env[1275]: time="2025-05-17T00:44:49.889877056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.890616 env[1275]: time="2025-05-17T00:44:49.890599633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.891365 env[1275]: time="2025-05-17T00:44:49.891349381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:49.891741 env[1275]: time="2025-05-17T00:44:49.891725943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:44:49.892151 env[1275]: time="2025-05-17T00:44:49.892135819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:44:50.459820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213843579.mount: Deactivated successfully. 
May 17 00:44:52.961121 env[1275]: time="2025-05-17T00:44:52.961072711Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:52.972309 env[1275]: time="2025-05-17T00:44:52.972272502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:52.980172 env[1275]: time="2025-05-17T00:44:52.980137559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:52.989600 env[1275]: time="2025-05-17T00:44:52.989569554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:52.990473 env[1275]: time="2025-05-17T00:44:52.990447002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:44:53.565036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:44:53.565205 systemd[1]: Stopped kubelet.service. May 17 00:44:53.566394 systemd[1]: Starting kubelet.service... May 17 00:44:55.064573 update_engine[1264]: I0517 00:44:55.064196 1264 update_attempter.cc:509] Updating boot flags... May 17 00:44:55.339698 systemd[1]: Started kubelet.service. 
May 17 00:44:55.376326 kubelet[1643]: E0517 00:44:55.376301 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:44:55.377350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:44:55.377432 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:44:55.702201 systemd[1]: Stopped kubelet.service. May 17 00:44:55.704034 systemd[1]: Starting kubelet.service... May 17 00:44:55.723051 systemd[1]: Reloading. May 17 00:44:55.784477 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2025-05-17T00:44:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:44:55.784683 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2025-05-17T00:44:55Z" level=info msg="torcx already run" May 17 00:44:55.845728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:44:55.845840 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:44:55.857460 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:44:55.959809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:44:55.959876 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:44:55.960125 systemd[1]: Stopped kubelet.service. May 17 00:44:55.961638 systemd[1]: Starting kubelet.service... May 17 00:44:56.429795 systemd[1]: Started kubelet.service. May 17 00:44:56.520185 kubelet[1739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:44:56.520185 kubelet[1739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:44:56.520185 kubelet[1739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:44:56.520434 kubelet[1739]: I0517 00:44:56.520222 1739 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:44:56.809305 kubelet[1739]: I0517 00:44:56.809245 1739 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:44:56.809305 kubelet[1739]: I0517 00:44:56.809264 1739 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:44:56.809799 kubelet[1739]: I0517 00:44:56.809786 1739 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:44:56.869358 kubelet[1739]: E0517 00:44:56.869333 1739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:56.870193 kubelet[1739]: I0517 00:44:56.870177 1739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:44:56.876998 kubelet[1739]: E0517 00:44:56.876969 1739 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:44:56.876998 kubelet[1739]: I0517 00:44:56.876989 1739 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:44:56.880441 kubelet[1739]: I0517 00:44:56.880418 1739 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:44:56.881502 kubelet[1739]: I0517 00:44:56.881472 1739 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:44:56.881610 kubelet[1739]: I0517 00:44:56.881502 1739 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:44:56.881677 kubelet[1739]: I0517 00:44:56.881614 1739 topology_manager.go:138] "Creating topology manager with none policy" 
May 17 00:44:56.881677 kubelet[1739]: I0517 00:44:56.881620 1739 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:44:56.881726 kubelet[1739]: I0517 00:44:56.881692 1739 state_mem.go:36] "Initialized new in-memory state store" May 17 00:44:56.884833 kubelet[1739]: I0517 00:44:56.884815 1739 kubelet.go:446] "Attempting to sync node with API server" May 17 00:44:56.884886 kubelet[1739]: I0517 00:44:56.884838 1739 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:44:56.884886 kubelet[1739]: I0517 00:44:56.884852 1739 kubelet.go:352] "Adding apiserver pod source" May 17 00:44:56.884939 kubelet[1739]: I0517 00:44:56.884927 1739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:44:56.888422 kubelet[1739]: W0517 00:44:56.888389 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:56.888482 kubelet[1739]: E0517 00:44:56.888431 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:56.896755 kubelet[1739]: I0517 00:44:56.896732 1739 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:44:56.897123 kubelet[1739]: I0517 00:44:56.897114 1739 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:44:56.900237 kubelet[1739]: W0517 00:44:56.900210 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:56.900320 kubelet[1739]: E0517 00:44:56.900307 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:56.904081 kubelet[1739]: W0517 00:44:56.904066 1739 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:44:56.908780 kubelet[1739]: I0517 00:44:56.908763 1739 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:44:56.908822 kubelet[1739]: I0517 00:44:56.908792 1739 server.go:1287] "Started kubelet" May 17 00:44:56.910405 kubelet[1739]: I0517 00:44:56.910383 1739 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:44:56.915001 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:44:56.915164 kubelet[1739]: I0517 00:44:56.915146 1739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:44:56.915272 kubelet[1739]: I0517 00:44:56.915228 1739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:44:56.915444 kubelet[1739]: I0517 00:44:56.915433 1739 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:44:56.915806 kubelet[1739]: I0517 00:44:56.915791 1739 server.go:479] "Adding debug handlers to kubelet server" May 17 00:44:56.918209 kubelet[1739]: I0517 00:44:56.918194 1739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:44:56.921336 kubelet[1739]: E0517 00:44:56.920120 1739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.99:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184029dfde9918b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:44:56.908773561 +0000 UTC m=+0.475533407,LastTimestamp:2025-05-17 00:44:56.908773561 +0000 UTC m=+0.475533407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:44:56.922915 kubelet[1739]: E0517 00:44:56.922716 1739 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:44:56.922915 kubelet[1739]: I0517 00:44:56.922741 1739 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 
00:44:56.922915 kubelet[1739]: I0517 00:44:56.922855 1739 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:44:56.922915 kubelet[1739]: I0517 00:44:56.922885 1739 reconciler.go:26] "Reconciler: start to sync state" May 17 00:44:56.923161 kubelet[1739]: W0517 00:44:56.923107 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:56.923161 kubelet[1739]: E0517 00:44:56.923133 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:56.923257 kubelet[1739]: I0517 00:44:56.923244 1739 factory.go:221] Registration of the systemd container factory successfully May 17 00:44:56.923313 kubelet[1739]: I0517 00:44:56.923298 1739 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:44:56.924068 kubelet[1739]: E0517 00:44:56.924055 1739 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:44:56.924126 kubelet[1739]: I0517 00:44:56.924115 1739 factory.go:221] Registration of the containerd container factory successfully May 17 00:44:56.934962 kubelet[1739]: I0517 00:44:56.934934 1739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:44:56.935616 kubelet[1739]: I0517 00:44:56.935606 1739 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:44:56.935671 kubelet[1739]: I0517 00:44:56.935664 1739 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:44:56.935729 kubelet[1739]: I0517 00:44:56.935719 1739 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:44:56.935774 kubelet[1739]: I0517 00:44:56.935766 1739 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:44:56.935844 kubelet[1739]: E0517 00:44:56.935833 1739 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:44:56.939319 kubelet[1739]: E0517 00:44:56.939289 1739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="200ms" May 17 00:44:56.941981 kubelet[1739]: W0517 00:44:56.941940 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:56.942057 kubelet[1739]: E0517 00:44:56.941986 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:56.943590 kubelet[1739]: I0517 00:44:56.943583 1739 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:44:56.943653 kubelet[1739]: I0517 00:44:56.943637 1739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:44:56.943702 
kubelet[1739]: I0517 00:44:56.943695 1739 state_mem.go:36] "Initialized new in-memory state store" May 17 00:44:56.944643 kubelet[1739]: I0517 00:44:56.944636 1739 policy_none.go:49] "None policy: Start" May 17 00:44:56.944695 kubelet[1739]: I0517 00:44:56.944688 1739 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:44:56.944741 kubelet[1739]: I0517 00:44:56.944735 1739 state_mem.go:35] "Initializing new in-memory state store" May 17 00:44:56.947534 systemd[1]: Created slice kubepods.slice. May 17 00:44:56.950392 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:44:56.952507 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:44:56.957801 kubelet[1739]: I0517 00:44:56.957783 1739 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:44:56.957904 kubelet[1739]: I0517 00:44:56.957893 1739 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:44:56.957939 kubelet[1739]: I0517 00:44:56.957910 1739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:44:56.958682 kubelet[1739]: I0517 00:44:56.958485 1739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:44:56.959368 kubelet[1739]: E0517 00:44:56.959106 1739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:44:56.959368 kubelet[1739]: E0517 00:44:56.959126 1739 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:44:57.040240 systemd[1]: Created slice kubepods-burstable-pod8c283c5e30ad78e962db6607dc744c00.slice. 
May 17 00:44:57.055289 kubelet[1739]: E0517 00:44:57.055258 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:57.057027 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 17 00:44:57.059818 kubelet[1739]: I0517 00:44:57.059773 1739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:44:57.060575 kubelet[1739]: E0517 00:44:57.060564 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:57.062526 kubelet[1739]: E0517 00:44:57.062034 1739 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 17 00:44:57.062440 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 17 00:44:57.063890 kubelet[1739]: E0517 00:44:57.063880 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:57.139776 kubelet[1739]: E0517 00:44:57.139752 1739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="400ms" May 17 00:44:57.225383 kubelet[1739]: I0517 00:44:57.225356 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c283c5e30ad78e962db6607dc744c00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c283c5e30ad78e962db6607dc744c00\") " pod="kube-system/kube-apiserver-localhost" May 17 00:44:57.225545 kubelet[1739]: I0517 00:44:57.225534 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:44:57.225603 kubelet[1739]: I0517 00:44:57.225594 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:44:57.225660 kubelet[1739]: I0517 00:44:57.225650 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:44:57.225715 kubelet[1739]: I0517 00:44:57.225706 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:44:57.225781 kubelet[1739]: I0517 00:44:57.225772 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c283c5e30ad78e962db6607dc744c00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c283c5e30ad78e962db6607dc744c00\") " pod="kube-system/kube-apiserver-localhost" May 17 00:44:57.225832 kubelet[1739]: I0517 00:44:57.225823 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c283c5e30ad78e962db6607dc744c00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c283c5e30ad78e962db6607dc744c00\") " pod="kube-system/kube-apiserver-localhost" May 17 00:44:57.225895 kubelet[1739]: I0517 00:44:57.225885 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:44:57.225949 kubelet[1739]: I0517 00:44:57.225940 1739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:44:57.262915 kubelet[1739]: I0517 00:44:57.262891 1739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:44:57.263124 kubelet[1739]: E0517 00:44:57.263111 1739 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 17 00:44:57.356681 env[1275]: time="2025-05-17T00:44:57.356412565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c283c5e30ad78e962db6607dc744c00,Namespace:kube-system,Attempt:0,}" May 17 00:44:57.361688 env[1275]: time="2025-05-17T00:44:57.361670036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 17 00:44:57.365364 env[1275]: time="2025-05-17T00:44:57.365210401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 17 00:44:57.540670 kubelet[1739]: E0517 00:44:57.540639 1739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="800ms" May 17 00:44:57.664132 kubelet[1739]: I0517 00:44:57.664111 1739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:44:57.664328 kubelet[1739]: E0517 00:44:57.664309 1739 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: 
connect: connection refused" node="localhost" May 17 00:44:57.744230 kubelet[1739]: W0517 00:44:57.744170 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:57.744230 kubelet[1739]: E0517 00:44:57.744218 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:57.958770 kubelet[1739]: W0517 00:44:57.958688 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:57.958770 kubelet[1739]: E0517 00:44:57.958741 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:57.963325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767109963.mount: Deactivated successfully. 
May 17 00:44:57.977216 kubelet[1739]: W0517 00:44:57.977185 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:57.977278 kubelet[1739]: E0517 00:44:57.977224 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:57.982400 env[1275]: time="2025-05-17T00:44:57.982379176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.984054 env[1275]: time="2025-05-17T00:44:57.984035113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.985296 env[1275]: time="2025-05-17T00:44:57.985283482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.986164 env[1275]: time="2025-05-17T00:44:57.986145071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.986812 env[1275]: time="2025-05-17T00:44:57.986788998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.987271 
env[1275]: time="2025-05-17T00:44:57.987256361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.988865 env[1275]: time="2025-05-17T00:44:57.988842539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.990273 env[1275]: time="2025-05-17T00:44:57.990257133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.991621 env[1275]: time="2025-05-17T00:44:57.991609203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.992235 env[1275]: time="2025-05-17T00:44:57.992223256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.992861 env[1275]: time="2025-05-17T00:44:57.992845338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:57.993263 env[1275]: time="2025-05-17T00:44:57.993252120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:58.007694 env[1275]: time="2025-05-17T00:44:58.007659338Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:58.007775 env[1275]: time="2025-05-17T00:44:58.007695314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:58.007775 env[1275]: time="2025-05-17T00:44:58.007709536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:58.007824 env[1275]: time="2025-05-17T00:44:58.007780439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51db38e9a5bbc71c68e512494a73f53ffc63ee4f66d4639de61e076997f6939 pid=1775 runtime=io.containerd.runc.v2 May 17 00:44:58.017293 env[1275]: time="2025-05-17T00:44:58.017250101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:58.017441 env[1275]: time="2025-05-17T00:44:58.017275476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:58.017522 env[1275]: time="2025-05-17T00:44:58.017429860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:58.017856 env[1275]: time="2025-05-17T00:44:58.017813064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67930a653a8630ea127923fc8185c4b1e1d9a8b05dfd83043a7030421cff5c89 pid=1794 runtime=io.containerd.runc.v2 May 17 00:44:58.020721 systemd[1]: Started cri-containerd-e51db38e9a5bbc71c68e512494a73f53ffc63ee4f66d4639de61e076997f6939.scope. May 17 00:44:58.025843 env[1275]: time="2025-05-17T00:44:58.025277538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:58.025843 env[1275]: time="2025-05-17T00:44:58.025318557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:58.025843 env[1275]: time="2025-05-17T00:44:58.025325754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:58.025843 env[1275]: time="2025-05-17T00:44:58.025417042Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4a26c46a67ad3f856b35fd068668fb073c6cc34658c3a7b9f43fc1246e3a2fe pid=1812 runtime=io.containerd.runc.v2 May 17 00:44:58.035582 systemd[1]: Started cri-containerd-67930a653a8630ea127923fc8185c4b1e1d9a8b05dfd83043a7030421cff5c89.scope. May 17 00:44:58.054586 systemd[1]: Started cri-containerd-d4a26c46a67ad3f856b35fd068668fb073c6cc34658c3a7b9f43fc1246e3a2fe.scope. 
May 17 00:44:58.072779 env[1275]: time="2025-05-17T00:44:58.072753952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c283c5e30ad78e962db6607dc744c00,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51db38e9a5bbc71c68e512494a73f53ffc63ee4f66d4639de61e076997f6939\"" May 17 00:44:58.077307 env[1275]: time="2025-05-17T00:44:58.077284827Z" level=info msg="CreateContainer within sandbox \"e51db38e9a5bbc71c68e512494a73f53ffc63ee4f66d4639de61e076997f6939\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:44:58.087818 env[1275]: time="2025-05-17T00:44:58.087788183Z" level=info msg="CreateContainer within sandbox \"e51db38e9a5bbc71c68e512494a73f53ffc63ee4f66d4639de61e076997f6939\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"174d19570e3d8912ab44e35a6b3595c6266922013d2f446a51fa29fcbf5e2e4b\"" May 17 00:44:58.088255 env[1275]: time="2025-05-17T00:44:58.088238062Z" level=info msg="StartContainer for \"174d19570e3d8912ab44e35a6b3595c6266922013d2f446a51fa29fcbf5e2e4b\"" May 17 00:44:58.095415 env[1275]: time="2025-05-17T00:44:58.095386399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"67930a653a8630ea127923fc8185c4b1e1d9a8b05dfd83043a7030421cff5c89\"" May 17 00:44:58.097334 env[1275]: time="2025-05-17T00:44:58.097312864Z" level=info msg="CreateContainer within sandbox \"67930a653a8630ea127923fc8185c4b1e1d9a8b05dfd83043a7030421cff5c89\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:44:58.106786 systemd[1]: Started cri-containerd-174d19570e3d8912ab44e35a6b3595c6266922013d2f446a51fa29fcbf5e2e4b.scope. 
May 17 00:44:58.110471 env[1275]: time="2025-05-17T00:44:58.109891727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a26c46a67ad3f856b35fd068668fb073c6cc34658c3a7b9f43fc1246e3a2fe\"" May 17 00:44:58.115305 env[1275]: time="2025-05-17T00:44:58.115277385Z" level=info msg="CreateContainer within sandbox \"67930a653a8630ea127923fc8185c4b1e1d9a8b05dfd83043a7030421cff5c89\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8d3c61321690c5ee106a4b737272678db04194ac6ab52cdac312ee791cb081a\"" May 17 00:44:58.115904 env[1275]: time="2025-05-17T00:44:58.115878981Z" level=info msg="CreateContainer within sandbox \"d4a26c46a67ad3f856b35fd068668fb073c6cc34658c3a7b9f43fc1246e3a2fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:44:58.116048 env[1275]: time="2025-05-17T00:44:58.116028213Z" level=info msg="StartContainer for \"c8d3c61321690c5ee106a4b737272678db04194ac6ab52cdac312ee791cb081a\"" May 17 00:44:58.134479 env[1275]: time="2025-05-17T00:44:58.134451332Z" level=info msg="CreateContainer within sandbox \"d4a26c46a67ad3f856b35fd068668fb073c6cc34658c3a7b9f43fc1246e3a2fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be0fc8446e171363ed1cb6d5b6770415f80a7fd5840069fa456e72f78797226c\"" May 17 00:44:58.135070 env[1275]: time="2025-05-17T00:44:58.135057440Z" level=info msg="StartContainer for \"be0fc8446e171363ed1cb6d5b6770415f80a7fd5840069fa456e72f78797226c\"" May 17 00:44:58.137833 systemd[1]: Started cri-containerd-c8d3c61321690c5ee106a4b737272678db04194ac6ab52cdac312ee791cb081a.scope. 
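The reflector errors earlier in this log keep failing with "dial tcp 139.178.70.99:6443: connect: connection refused" until the kube-apiserver container started in the entries above comes up. Outside the kubelet, that same reachability check is just a plain TCP connect; a minimal sketch (host and port are taken from the log, the helper name is mine):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True when a plain TCP connect to host:port succeeds.

    Mirrors the dial the client-go reflectors attempt; a closed port
    raises ConnectionRefusedError (an OSError), matching the
    "connect: connection refused" entries in the log.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# While the apiserver static pod is still starting, this would return
# False for ("139.178.70.99", 6443), just as the reflectors observe.
```

Once the kube-apiserver container reaches running state (the "StartContainer ... returns successfully" entries below), the same probe starts succeeding and the reflector list/watch calls recover on their next retry.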
May 17 00:44:58.153400 env[1275]: time="2025-05-17T00:44:58.153371365Z" level=info msg="StartContainer for \"174d19570e3d8912ab44e35a6b3595c6266922013d2f446a51fa29fcbf5e2e4b\" returns successfully" May 17 00:44:58.160430 systemd[1]: Started cri-containerd-be0fc8446e171363ed1cb6d5b6770415f80a7fd5840069fa456e72f78797226c.scope. May 17 00:44:58.179255 env[1275]: time="2025-05-17T00:44:58.179227795Z" level=info msg="StartContainer for \"c8d3c61321690c5ee106a4b737272678db04194ac6ab52cdac312ee791cb081a\" returns successfully" May 17 00:44:58.199886 env[1275]: time="2025-05-17T00:44:58.199855420Z" level=info msg="StartContainer for \"be0fc8446e171363ed1cb6d5b6770415f80a7fd5840069fa456e72f78797226c\" returns successfully" May 17 00:44:58.273999 kubelet[1739]: W0517 00:44:58.273913 1739 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused May 17 00:44:58.273999 kubelet[1739]: E0517 00:44:58.273957 1739 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.99:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:58.341988 kubelet[1739]: E0517 00:44:58.341958 1739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="1.6s" May 17 00:44:58.465915 kubelet[1739]: I0517 00:44:58.465630 1739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:44:58.465915 kubelet[1739]: E0517 00:44:58.465879 1739 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" May 17 00:44:58.947260 kubelet[1739]: E0517 00:44:58.947244 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:58.951243 kubelet[1739]: E0517 00:44:58.951229 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:58.952632 kubelet[1739]: E0517 00:44:58.952622 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:59.944112 kubelet[1739]: E0517 00:44:59.944091 1739 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 00:44:59.953883 kubelet[1739]: E0517 00:44:59.953864 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:44:59.954091 kubelet[1739]: E0517 00:44:59.954035 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:45:00.067695 kubelet[1739]: I0517 00:45:00.067673 1739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:45:00.086976 kubelet[1739]: I0517 00:45:00.086948 1739 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 00:45:00.086976 kubelet[1739]: E0517 00:45:00.086972 1739 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 17 00:45:00.093549 kubelet[1739]: 
E0517 00:45:00.093513 1739 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:45:00.157356 kubelet[1739]: E0517 00:45:00.157338 1739 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:45:00.194364 kubelet[1739]: E0517 00:45:00.194286 1739 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:45:00.326691 kubelet[1739]: I0517 00:45:00.326667 1739 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 00:45:00.329900 kubelet[1739]: E0517 00:45:00.329881 1739 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 00:45:00.330023 kubelet[1739]: I0517 00:45:00.330014 1739 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:45:00.330905 kubelet[1739]: E0517 00:45:00.330894 1739 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 17 00:45:00.330960 kubelet[1739]: I0517 00:45:00.330953 1739 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:45:00.332893 kubelet[1739]: E0517 00:45:00.332882 1739 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 00:45:00.891203 kubelet[1739]: I0517 00:45:00.891181 1739 apiserver.go:52] "Watching apiserver" May 17 00:45:00.923125 kubelet[1739]: I0517 
00:45:00.923105 1739 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:45:01.436259 systemd[1]: Reloading. May 17 00:45:01.505559 /usr/lib/systemd/system-generators/torcx-generator[2031]: time="2025-05-17T00:45:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:45:01.505581 /usr/lib/systemd/system-generators/torcx-generator[2031]: time="2025-05-17T00:45:01Z" level=info msg="torcx already run" May 17 00:45:01.560611 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:45:01.560739 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:45:01.572620 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:45:01.642178 kubelet[1739]: I0517 00:45:01.642123 1739 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:45:01.642452 systemd[1]: Stopping kubelet.service... May 17 00:45:01.661501 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:45:01.661627 systemd[1]: Stopped kubelet.service. May 17 00:45:01.663150 systemd[1]: Starting kubelet.service... May 17 00:45:03.477940 systemd[1]: Started kubelet.service. 
May 17 00:45:03.538408 sudo[2105]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:45:03.538624 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:45:03.544377 kubelet[2095]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:45:03.544377 kubelet[2095]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:45:03.544377 kubelet[2095]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:45:03.544377 kubelet[2095]: I0517 00:45:03.544249 2095 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:45:03.552028 kubelet[2095]: I0517 00:45:03.548991 2095 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:45:03.552028 kubelet[2095]: I0517 00:45:03.549007 2095 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:45:03.552028 kubelet[2095]: I0517 00:45:03.549200 2095 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:45:03.552028 kubelet[2095]: I0517 00:45:03.549895 2095 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:45:03.556914 kubelet[2095]: I0517 00:45:03.556814 2095 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:45:03.562274 kubelet[2095]: E0517 00:45:03.562244 2095 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:45:03.562274 kubelet[2095]: I0517 00:45:03.562261 2095 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:45:03.565248 kubelet[2095]: I0517 00:45:03.565229 2095 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:45:03.565397 kubelet[2095]: I0517 00:45:03.565348 2095 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:45:03.565558 kubelet[2095]: I0517 00:45:03.565370 2095 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:45:03.565558 kubelet[2095]: I0517 00:45:03.565520 2095 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:45:03.565558 kubelet[2095]: I0517 00:45:03.565526 2095 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:45:03.567358 kubelet[2095]: I0517 00:45:03.567345 2095 state_mem.go:36] "Initialized new in-memory state store" May 17 00:45:03.567476 kubelet[2095]: I0517 00:45:03.567465 2095 kubelet.go:446] "Attempting 
to sync node with API server" May 17 00:45:03.567518 kubelet[2095]: I0517 00:45:03.567493 2095 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:45:03.567518 kubelet[2095]: I0517 00:45:03.567507 2095 kubelet.go:352] "Adding apiserver pod source" May 17 00:45:03.567518 kubelet[2095]: I0517 00:45:03.567516 2095 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:45:03.569807 kubelet[2095]: I0517 00:45:03.569793 2095 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:45:03.570053 kubelet[2095]: I0517 00:45:03.570038 2095 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:45:03.570289 kubelet[2095]: I0517 00:45:03.570277 2095 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:45:03.570336 kubelet[2095]: I0517 00:45:03.570294 2095 server.go:1287] "Started kubelet" May 17 00:45:03.573398 kubelet[2095]: I0517 00:45:03.573382 2095 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:45:03.583750 kubelet[2095]: I0517 00:45:03.583715 2095 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:45:03.584476 kubelet[2095]: I0517 00:45:03.584462 2095 server.go:479] "Adding debug handlers to kubelet server" May 17 00:45:03.585025 kubelet[2095]: I0517 00:45:03.584993 2095 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:45:03.585119 kubelet[2095]: I0517 00:45:03.585108 2095 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:45:03.585306 kubelet[2095]: I0517 00:45:03.585262 2095 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:45:03.585693 kubelet[2095]: I0517 00:45:03.585682 2095 
volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:45:03.585803 kubelet[2095]: E0517 00:45:03.585789 2095 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:45:03.587986 kubelet[2095]: I0517 00:45:03.587971 2095 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:45:03.590178 kubelet[2095]: I0517 00:45:03.588236 2095 reconciler.go:26] "Reconciler: start to sync state" May 17 00:45:03.597779 kubelet[2095]: I0517 00:45:03.597758 2095 factory.go:221] Registration of the systemd container factory successfully May 17 00:45:03.597891 kubelet[2095]: I0517 00:45:03.597827 2095 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:45:03.602494 kubelet[2095]: I0517 00:45:03.602474 2095 factory.go:221] Registration of the containerd container factory successfully May 17 00:45:03.617437 kubelet[2095]: I0517 00:45:03.617082 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:45:03.618172 kubelet[2095]: I0517 00:45:03.618149 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:45:03.618234 kubelet[2095]: I0517 00:45:03.618227 2095 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:45:03.618285 kubelet[2095]: I0517 00:45:03.618277 2095 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
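The HardEvictionThresholds in the nodeConfig dump above mix absolute quantities (memory.available < 100Mi) with percentages of capacity (nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5%), all with the LessThan operator. A simplified sketch of how one such threshold is evaluated (the function name and shape are mine; the kubelet's real logic lives in its eviction manager):

```python
from typing import Optional

def hard_threshold_exceeded(signal_value: float, capacity: float,
                            quantity: Optional[float] = None,
                            percentage: Optional[float] = None) -> bool:
    """Evaluate a kubelet-style LessThan hard eviction threshold.

    Each threshold carries either an absolute Quantity (e.g. 100Mi for
    memory.available) or a Percentage of the resource's capacity (e.g.
    0.1 for nodefs.available); the signal fires when the observed value
    drops below the resolved limit.
    """
    limit = quantity if quantity is not None else capacity * percentage
    return signal_value < limit

MI = 1024 * 1024
# memory.available < 100Mi: a node with only 80Mi free trips the signal.
low_memory = hard_threshold_exceeded(80 * MI, 3072 * MI, quantity=100 * MI)
# nodefs.available < 10%: 15% free disk does not.
disk_ok = hard_threshold_exceeded(15.0, 100.0, percentage=0.1)
```

With GracePeriod 0 on every entry, these are hard thresholds: crossing one starts eviction immediately, which is why the eviction manager's control loop starts as soon as the kubelet comes up in the entries below.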
May 17 00:45:03.618329 kubelet[2095]: I0517 00:45:03.618323 2095 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:45:03.618420 kubelet[2095]: E0517 00:45:03.618400 2095 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:45:03.635704 kubelet[2095]: I0517 00:45:03.635686 2095 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:45:03.635704 kubelet[2095]: I0517 00:45:03.635697 2095 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:45:03.635704 kubelet[2095]: I0517 00:45:03.635706 2095 state_mem.go:36] "Initialized new in-memory state store" May 17 00:45:03.635829 kubelet[2095]: I0517 00:45:03.635794 2095 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:45:03.635829 kubelet[2095]: I0517 00:45:03.635801 2095 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:45:03.635829 kubelet[2095]: I0517 00:45:03.635812 2095 policy_none.go:49] "None policy: Start" May 17 00:45:03.635829 kubelet[2095]: I0517 00:45:03.635817 2095 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:45:03.635829 kubelet[2095]: I0517 00:45:03.635823 2095 state_mem.go:35] "Initializing new in-memory state store" May 17 00:45:03.635922 kubelet[2095]: I0517 00:45:03.635881 2095 state_mem.go:75] "Updated machine memory state" May 17 00:45:03.638959 kubelet[2095]: I0517 00:45:03.638943 2095 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:45:03.639330 kubelet[2095]: I0517 00:45:03.639322 2095 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:45:03.639531 kubelet[2095]: I0517 00:45:03.639506 2095 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:45:03.639731 kubelet[2095]: I0517 00:45:03.639724 2095 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:45:03.641922 kubelet[2095]: E0517 00:45:03.641892 2095 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:45:03.719018 kubelet[2095]: I0517 00:45:03.718993 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:45:03.721287 kubelet[2095]: I0517 00:45:03.721273 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:45:03.721465 kubelet[2095]: I0517 00:45:03.721328 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 00:45:03.741435 kubelet[2095]: I0517 00:45:03.741378 2095 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:45:03.748455 kubelet[2095]: I0517 00:45:03.748437 2095 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 17 00:45:03.748636 kubelet[2095]: I0517 00:45:03.748626 2095 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 00:45:03.897281 kubelet[2095]: I0517 00:45:03.897258 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:45:03.897462 kubelet[2095]: I0517 00:45:03.897448 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " 
pod="kube-system/kube-controller-manager-localhost" May 17 00:45:03.897542 kubelet[2095]: I0517 00:45:03.897532 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:45:03.897603 kubelet[2095]: I0517 00:45:03.897590 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:45:03.897664 kubelet[2095]: I0517 00:45:03.897654 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c283c5e30ad78e962db6607dc744c00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c283c5e30ad78e962db6607dc744c00\") " pod="kube-system/kube-apiserver-localhost" May 17 00:45:03.897726 kubelet[2095]: I0517 00:45:03.897717 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c283c5e30ad78e962db6607dc744c00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c283c5e30ad78e962db6607dc744c00\") " pod="kube-system/kube-apiserver-localhost" May 17 00:45:03.897778 kubelet[2095]: I0517 00:45:03.897769 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " 
pod="kube-system/kube-controller-manager-localhost" May 17 00:45:03.897833 kubelet[2095]: I0517 00:45:03.897825 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:45:03.897886 kubelet[2095]: I0517 00:45:03.897877 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c283c5e30ad78e962db6607dc744c00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c283c5e30ad78e962db6607dc744c00\") " pod="kube-system/kube-apiserver-localhost" May 17 00:45:04.336851 sudo[2105]: pam_unix(sudo:session): session closed for user root May 17 00:45:04.570193 kubelet[2095]: I0517 00:45:04.570171 2095 apiserver.go:52] "Watching apiserver" May 17 00:45:04.588594 kubelet[2095]: I0517 00:45:04.588529 2095 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:45:04.632099 kubelet[2095]: I0517 00:45:04.632080 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:45:04.636257 kubelet[2095]: E0517 00:45:04.636232 2095 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:45:04.649319 kubelet[2095]: I0517 00:45:04.649276 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.649262583 podStartE2EDuration="1.649262583s" podCreationTimestamp="2025-05-17 00:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 
00:45:04.64555522 +0000 UTC m=+1.158060703" watchObservedRunningTime="2025-05-17 00:45:04.649262583 +0000 UTC m=+1.161768065" May 17 00:45:04.654164 kubelet[2095]: I0517 00:45:04.654123 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.654110517 podStartE2EDuration="1.654110517s" podCreationTimestamp="2025-05-17 00:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:45:04.649782102 +0000 UTC m=+1.162287584" watchObservedRunningTime="2025-05-17 00:45:04.654110517 +0000 UTC m=+1.166615993" May 17 00:45:04.659469 kubelet[2095]: I0517 00:45:04.659437 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.659426324 podStartE2EDuration="1.659426324s" podCreationTimestamp="2025-05-17 00:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:45:04.654449544 +0000 UTC m=+1.166955027" watchObservedRunningTime="2025-05-17 00:45:04.659426324 +0000 UTC m=+1.171931806" May 17 00:45:06.240115 sudo[1459]: pam_unix(sudo:session): session closed for user root May 17 00:45:06.242096 sshd[1456]: pam_unix(sshd:session): session closed for user core May 17 00:45:06.243646 systemd[1]: sshd@4-139.178.70.99:22-147.75.109.163:58088.service: Deactivated successfully. May 17 00:45:06.244084 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:45:06.244214 systemd[1]: session-7.scope: Consumed 3.424s CPU time. May 17 00:45:06.244576 systemd-logind[1263]: Session 7 logged out. Waiting for processes to exit. May 17 00:45:06.245338 systemd-logind[1263]: Removed session 7. May 17 00:45:07.193732 systemd[1]: Created slice kubepods-burstable-pod83bda6c3_5a5a_46af_a2d7_028195cd0545.slice. 
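The pod_startup_latency_tracker entries above report podStartE2EDuration as the gap between podCreationTimestamp and the first observed running time; the zero-valued firstStartedPulling/lastFinishedPulling mean no image pull contributed for these static pods. A sketch of that arithmetic, using the kube-apiserver-localhost values from the log (datetimes here carry microsecond precision, while the log keeps nanoseconds):

```python
from datetime import datetime, timezone

def pod_startup_duration(created: datetime, observed_running: datetime) -> float:
    """Seconds from pod creation to the first observed running state,
    i.e. the podStartE2EDuration the latency tracker logs when no
    image pull time is involved."""
    return (observed_running - created).total_seconds()

# From the log: podCreationTimestamp="2025-05-17 00:45:03 +0000 UTC",
# observedRunningTime="2025-05-17 00:45:04.649262583 +0000 UTC"
created = datetime(2025, 5, 17, 0, 45, 3, tzinfo=timezone.utc)
running = datetime(2025, 5, 17, 0, 45, 4, 649263, tzinfo=timezone.utc)
duration = pod_startup_duration(created, running)
# duration is about 1.649 s, in line with podStartSLOduration=1.649262583
```

The `m=+1.16...` suffixes on the observedRunningTime values are monotonic-clock offsets since kubelet start, which is why they differ from the wall-clock durations by roughly half a second here.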
May 17 00:45:07.199346 systemd[1]: Created slice kubepods-besteffort-podb54d050e_36b6_4e62_8a46_8a21b5606b6f.slice. May 17 00:45:07.213623 kubelet[2095]: I0517 00:45:07.213598 2095 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:45:07.213903 env[1275]: time="2025-05-17T00:45:07.213787463Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:45:07.214112 kubelet[2095]: I0517 00:45:07.214101 2095 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:45:07.223619 kubelet[2095]: I0517 00:45:07.223596 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjx4\" (UniqueName: \"kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-kube-api-access-qfjx4\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.223784 kubelet[2095]: I0517 00:45:07.223770 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-etc-cni-netd\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.223858 kubelet[2095]: I0517 00:45:07.223845 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b54d050e-36b6-4e62-8a46-8a21b5606b6f-lib-modules\") pod \"kube-proxy-wwltm\" (UID: \"b54d050e-36b6-4e62-8a46-8a21b5606b6f\") " pod="kube-system/kube-proxy-wwltm" May 17 00:45:07.223930 kubelet[2095]: I0517 00:45:07.223919 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-net\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224000 kubelet[2095]: I0517 00:45:07.223979 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-xtables-lock\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224060 kubelet[2095]: I0517 00:45:07.224050 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b54d050e-36b6-4e62-8a46-8a21b5606b6f-kube-proxy\") pod \"kube-proxy-wwltm\" (UID: \"b54d050e-36b6-4e62-8a46-8a21b5606b6f\") " pod="kube-system/kube-proxy-wwltm" May 17 00:45:07.224125 kubelet[2095]: I0517 00:45:07.224115 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-kernel\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224201 kubelet[2095]: I0517 00:45:07.224190 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-bpf-maps\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224259 kubelet[2095]: I0517 00:45:07.224250 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-lib-modules\") pod \"cilium-vnm2m\" (UID: 
\"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224318 kubelet[2095]: I0517 00:45:07.224308 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b54d050e-36b6-4e62-8a46-8a21b5606b6f-xtables-lock\") pod \"kube-proxy-wwltm\" (UID: \"b54d050e-36b6-4e62-8a46-8a21b5606b6f\") " pod="kube-system/kube-proxy-wwltm" May 17 00:45:07.224376 kubelet[2095]: I0517 00:45:07.224368 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-hostproc\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224462 kubelet[2095]: I0517 00:45:07.224454 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cni-path\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224813 kubelet[2095]: I0517 00:45:07.224803 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-run\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224891 kubelet[2095]: I0517 00:45:07.224882 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-cgroup\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.224952 kubelet[2095]: I0517 00:45:07.224942 2095 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxp47\" (UniqueName: \"kubernetes.io/projected/b54d050e-36b6-4e62-8a46-8a21b5606b6f-kube-api-access-wxp47\") pod \"kube-proxy-wwltm\" (UID: \"b54d050e-36b6-4e62-8a46-8a21b5606b6f\") " pod="kube-system/kube-proxy-wwltm" May 17 00:45:07.225021 kubelet[2095]: I0517 00:45:07.225013 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-hubble-tls\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.225433 kubelet[2095]: I0517 00:45:07.225419 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83bda6c3-5a5a-46af-a2d7-028195cd0545-clustermesh-secrets\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.225515 kubelet[2095]: I0517 00:45:07.225504 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-config-path\") pod \"cilium-vnm2m\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " pod="kube-system/cilium-vnm2m" May 17 00:45:07.327060 kubelet[2095]: I0517 00:45:07.327036 2095 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:45:07.335282 kubelet[2095]: E0517 00:45:07.335261 2095 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:45:07.335390 kubelet[2095]: E0517 00:45:07.335382 2095 projected.go:194] Error preparing data for projected volume kube-api-access-qfjx4 for pod kube-system/cilium-vnm2m: configmap "kube-root-ca.crt" not found May 17 00:45:07.335451 kubelet[2095]: E0517 00:45:07.335262 2095 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:45:07.335485 kubelet[2095]: E0517 00:45:07.335452 2095 projected.go:194] Error preparing data for projected volume kube-api-access-wxp47 for pod kube-system/kube-proxy-wwltm: configmap "kube-root-ca.crt" not found May 17 00:45:07.335536 kubelet[2095]: E0517 00:45:07.335528 2095 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-kube-api-access-qfjx4 podName:83bda6c3-5a5a-46af-a2d7-028195cd0545 nodeName:}" failed. No retries permitted until 2025-05-17 00:45:07.83551479 +0000 UTC m=+4.348020264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qfjx4" (UniqueName: "kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-kube-api-access-qfjx4") pod "cilium-vnm2m" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545") : configmap "kube-root-ca.crt" not found May 17 00:45:07.335626 kubelet[2095]: E0517 00:45:07.335613 2095 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b54d050e-36b6-4e62-8a46-8a21b5606b6f-kube-api-access-wxp47 podName:b54d050e-36b6-4e62-8a46-8a21b5606b6f nodeName:}" failed. No retries permitted until 2025-05-17 00:45:07.835605595 +0000 UTC m=+4.348111070 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wxp47" (UniqueName: "kubernetes.io/projected/b54d050e-36b6-4e62-8a46-8a21b5606b6f-kube-api-access-wxp47") pod "kube-proxy-wwltm" (UID: "b54d050e-36b6-4e62-8a46-8a21b5606b6f") : configmap "kube-root-ca.crt" not found May 17 00:45:08.096420 env[1275]: time="2025-05-17T00:45:08.096384414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnm2m,Uid:83bda6c3-5a5a-46af-a2d7-028195cd0545,Namespace:kube-system,Attempt:0,}" May 17 00:45:08.106272 env[1275]: time="2025-05-17T00:45:08.106245118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wwltm,Uid:b54d050e-36b6-4e62-8a46-8a21b5606b6f,Namespace:kube-system,Attempt:0,}" May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.123767950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.123797146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.123805720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.123906085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d pid=2184 runtime=io.containerd.runc.v2 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.124233828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.124284446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.124297927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:45:08.131197 env[1275]: time="2025-05-17T00:45:08.124432112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e907fedb9de5a4d690839a82edf49bf47632613a5d4571f8400e4069032af846 pid=2181 runtime=io.containerd.runc.v2 May 17 00:45:08.144661 systemd[1]: Started cri-containerd-e907fedb9de5a4d690839a82edf49bf47632613a5d4571f8400e4069032af846.scope. May 17 00:45:08.148900 systemd[1]: Started cri-containerd-b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d.scope. May 17 00:45:08.176271 env[1275]: time="2025-05-17T00:45:08.176236228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnm2m,Uid:83bda6c3-5a5a-46af-a2d7-028195cd0545,Namespace:kube-system,Attempt:0,} returns sandbox id \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\"" May 17 00:45:08.178375 env[1275]: time="2025-05-17T00:45:08.178350413Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:45:08.182705 env[1275]: time="2025-05-17T00:45:08.182638407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wwltm,Uid:b54d050e-36b6-4e62-8a46-8a21b5606b6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e907fedb9de5a4d690839a82edf49bf47632613a5d4571f8400e4069032af846\"" May 17 00:45:08.185184 env[1275]: time="2025-05-17T00:45:08.184312480Z" level=info msg="CreateContainer within sandbox \"e907fedb9de5a4d690839a82edf49bf47632613a5d4571f8400e4069032af846\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:45:08.192951 env[1275]: time="2025-05-17T00:45:08.192912822Z" level=info msg="CreateContainer within 
sandbox \"e907fedb9de5a4d690839a82edf49bf47632613a5d4571f8400e4069032af846\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51356afa236dc90f6aba8e10f1424c5ff37055361fa79fd2e66ec75326840a86\"" May 17 00:45:08.194197 env[1275]: time="2025-05-17T00:45:08.194174209Z" level=info msg="StartContainer for \"51356afa236dc90f6aba8e10f1424c5ff37055361fa79fd2e66ec75326840a86\"" May 17 00:45:08.209298 systemd[1]: Started cri-containerd-51356afa236dc90f6aba8e10f1424c5ff37055361fa79fd2e66ec75326840a86.scope. May 17 00:45:08.235580 env[1275]: time="2025-05-17T00:45:08.235546770Z" level=info msg="StartContainer for \"51356afa236dc90f6aba8e10f1424c5ff37055361fa79fd2e66ec75326840a86\" returns successfully" May 17 00:45:08.309828 systemd[1]: Created slice kubepods-besteffort-pod5b6b807d_7944_4679_963c_49de35c56b4f.slice. May 17 00:45:08.334253 kubelet[2095]: I0517 00:45:08.334228 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jfz\" (UniqueName: \"kubernetes.io/projected/5b6b807d-7944-4679-963c-49de35c56b4f-kube-api-access-r7jfz\") pod \"cilium-operator-6c4d7847fc-vvcdl\" (UID: \"5b6b807d-7944-4679-963c-49de35c56b4f\") " pod="kube-system/cilium-operator-6c4d7847fc-vvcdl" May 17 00:45:08.334253 kubelet[2095]: I0517 00:45:08.334252 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b6b807d-7944-4679-963c-49de35c56b4f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vvcdl\" (UID: \"5b6b807d-7944-4679-963c-49de35c56b4f\") " pod="kube-system/cilium-operator-6c4d7847fc-vvcdl" May 17 00:45:08.612457 env[1275]: time="2025-05-17T00:45:08.612422336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vvcdl,Uid:5b6b807d-7944-4679-963c-49de35c56b4f,Namespace:kube-system,Attempt:0,}" May 17 00:45:08.622815 env[1275]: 
time="2025-05-17T00:45:08.622771000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:45:08.622815 env[1275]: time="2025-05-17T00:45:08.622794696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:45:08.622959 env[1275]: time="2025-05-17T00:45:08.622805184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:45:08.623060 env[1275]: time="2025-05-17T00:45:08.623036989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a pid=2294 runtime=io.containerd.runc.v2 May 17 00:45:08.639115 systemd[1]: Started cri-containerd-0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a.scope. May 17 00:45:08.672695 env[1275]: time="2025-05-17T00:45:08.672663498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vvcdl,Uid:5b6b807d-7944-4679-963c-49de35c56b4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a\"" May 17 00:45:09.356013 systemd[1]: run-containerd-runc-k8s.io-0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a-runc.mqLjNx.mount: Deactivated successfully. May 17 00:45:12.078442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141097859.mount: Deactivated successfully. 
May 17 00:45:14.227023 kubelet[2095]: I0517 00:45:14.225182 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wwltm" podStartSLOduration=7.225166143 podStartE2EDuration="7.225166143s" podCreationTimestamp="2025-05-17 00:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:45:08.652806039 +0000 UTC m=+5.165311521" watchObservedRunningTime="2025-05-17 00:45:14.225166143 +0000 UTC m=+10.737671626" May 17 00:45:14.887417 env[1275]: time="2025-05-17T00:45:14.887391568Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:45:14.901005 env[1275]: time="2025-05-17T00:45:14.900979503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:45:14.906212 env[1275]: time="2025-05-17T00:45:14.906141910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:45:14.909641 env[1275]: time="2025-05-17T00:45:14.906694670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:45:14.909641 env[1275]: time="2025-05-17T00:45:14.907798934Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:45:14.910163 env[1275]: 
time="2025-05-17T00:45:14.910137549Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:45:14.941313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809611327.mount: Deactivated successfully. May 17 00:45:14.951763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677536781.mount: Deactivated successfully. May 17 00:45:14.967691 env[1275]: time="2025-05-17T00:45:14.967664806Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\"" May 17 00:45:14.968443 env[1275]: time="2025-05-17T00:45:14.968040885Z" level=info msg="StartContainer for \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\"" May 17 00:45:14.983565 systemd[1]: Started cri-containerd-46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc.scope. May 17 00:45:15.006247 env[1275]: time="2025-05-17T00:45:15.006221562Z" level=info msg="StartContainer for \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\" returns successfully" May 17 00:45:15.014918 systemd[1]: cri-containerd-46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc.scope: Deactivated successfully. 
May 17 00:45:15.471629 env[1275]: time="2025-05-17T00:45:15.471589139Z" level=info msg="shim disconnected" id=46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc May 17 00:45:15.471808 env[1275]: time="2025-05-17T00:45:15.471797339Z" level=warning msg="cleaning up after shim disconnected" id=46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc namespace=k8s.io May 17 00:45:15.472437 env[1275]: time="2025-05-17T00:45:15.471952405Z" level=info msg="cleaning up dead shim" May 17 00:45:15.477082 env[1275]: time="2025-05-17T00:45:15.477055581Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n" May 17 00:45:15.671173 env[1275]: time="2025-05-17T00:45:15.670840486Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:45:15.681255 env[1275]: time="2025-05-17T00:45:15.681226636Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\"" May 17 00:45:15.682900 env[1275]: time="2025-05-17T00:45:15.682259011Z" level=info msg="StartContainer for \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\"" May 17 00:45:15.694770 systemd[1]: Started cri-containerd-a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6.scope. May 17 00:45:15.717279 env[1275]: time="2025-05-17T00:45:15.717253183Z" level=info msg="StartContainer for \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\" returns successfully" May 17 00:45:15.724534 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:45:15.724720 systemd[1]: Stopped systemd-sysctl.service. 
May 17 00:45:15.724874 systemd[1]: Stopping systemd-sysctl.service... May 17 00:45:15.726471 systemd[1]: Starting systemd-sysctl.service... May 17 00:45:15.731035 systemd[1]: cri-containerd-a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6.scope: Deactivated successfully. May 17 00:45:15.752250 env[1275]: time="2025-05-17T00:45:15.752226722Z" level=info msg="shim disconnected" id=a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6 May 17 00:45:15.752347 env[1275]: time="2025-05-17T00:45:15.752336676Z" level=warning msg="cleaning up after shim disconnected" id=a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6 namespace=k8s.io May 17 00:45:15.752395 env[1275]: time="2025-05-17T00:45:15.752384639Z" level=info msg="cleaning up dead shim" May 17 00:45:15.757245 env[1275]: time="2025-05-17T00:45:15.757220184Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" May 17 00:45:15.824249 systemd[1]: Finished systemd-sysctl.service. May 17 00:45:15.936955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc-rootfs.mount: Deactivated successfully. May 17 00:45:16.668550 env[1275]: time="2025-05-17T00:45:16.668524756Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:45:16.707756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040918042.mount: Deactivated successfully. May 17 00:45:16.711136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545349733.mount: Deactivated successfully. 
May 17 00:45:16.724139 env[1275]: time="2025-05-17T00:45:16.724116399Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\"" May 17 00:45:16.724591 env[1275]: time="2025-05-17T00:45:16.724578275Z" level=info msg="StartContainer for \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\"" May 17 00:45:16.734406 systemd[1]: Started cri-containerd-8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c.scope. May 17 00:45:16.758049 env[1275]: time="2025-05-17T00:45:16.758011241Z" level=info msg="StartContainer for \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\" returns successfully" May 17 00:45:16.771063 systemd[1]: cri-containerd-8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c.scope: Deactivated successfully. May 17 00:45:16.788280 env[1275]: time="2025-05-17T00:45:16.788241203Z" level=info msg="shim disconnected" id=8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c May 17 00:45:16.788280 env[1275]: time="2025-05-17T00:45:16.788272888Z" level=warning msg="cleaning up after shim disconnected" id=8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c namespace=k8s.io May 17 00:45:16.788280 env[1275]: time="2025-05-17T00:45:16.788280879Z" level=info msg="cleaning up dead shim" May 17 00:45:16.793568 env[1275]: time="2025-05-17T00:45:16.793541930Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:45:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2633 runtime=io.containerd.runc.v2\n" May 17 00:45:17.670581 env[1275]: time="2025-05-17T00:45:17.670372087Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 
00:45:17.775681 env[1275]: time="2025-05-17T00:45:17.775647856Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:45:17.801372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703966823.mount: Deactivated successfully. May 17 00:45:17.812101 env[1275]: time="2025-05-17T00:45:17.802920278Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:45:17.814754 env[1275]: time="2025-05-17T00:45:17.814723345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:45:17.827010 env[1275]: time="2025-05-17T00:45:17.814869880Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:45:17.827010 env[1275]: time="2025-05-17T00:45:17.816315548Z" level=info msg="CreateContainer within sandbox \"0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:45:17.833614 env[1275]: time="2025-05-17T00:45:17.833521312Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\"" May 17 00:45:17.835029 env[1275]: 
time="2025-05-17T00:45:17.835006858Z" level=info msg="StartContainer for \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\"" May 17 00:45:17.852345 systemd[1]: Started cri-containerd-d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf.scope. May 17 00:45:17.904768 env[1275]: time="2025-05-17T00:45:17.904740895Z" level=info msg="StartContainer for \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\" returns successfully" May 17 00:45:17.958826 systemd[1]: cri-containerd-d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf.scope: Deactivated successfully. May 17 00:45:17.970266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf-rootfs.mount: Deactivated successfully. May 17 00:45:18.200959 env[1275]: time="2025-05-17T00:45:18.200918445Z" level=info msg="CreateContainer within sandbox \"0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\"" May 17 00:45:18.203219 env[1275]: time="2025-05-17T00:45:18.203193812Z" level=info msg="StartContainer for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\"" May 17 00:45:18.204085 env[1275]: time="2025-05-17T00:45:18.204042335Z" level=info msg="shim disconnected" id=d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf May 17 00:45:18.204360 env[1275]: time="2025-05-17T00:45:18.204305718Z" level=warning msg="cleaning up after shim disconnected" id=d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf namespace=k8s.io May 17 00:45:18.204453 env[1275]: time="2025-05-17T00:45:18.204439628Z" level=info msg="cleaning up dead shim" May 17 00:45:18.216826 env[1275]: time="2025-05-17T00:45:18.216745182Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:45:18Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=2689 runtime=io.containerd.runc.v2\n" May 17 00:45:18.223041 systemd[1]: Started cri-containerd-9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea.scope. May 17 00:45:18.248402 env[1275]: time="2025-05-17T00:45:18.248368356Z" level=info msg="StartContainer for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" returns successfully" May 17 00:45:18.674058 env[1275]: time="2025-05-17T00:45:18.674035900Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:45:18.721347 env[1275]: time="2025-05-17T00:45:18.721316999Z" level=info msg="CreateContainer within sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\"" May 17 00:45:18.721826 env[1275]: time="2025-05-17T00:45:18.721811570Z" level=info msg="StartContainer for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\"" May 17 00:45:18.736361 systemd[1]: Started cri-containerd-69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9.scope. 
May 17 00:45:18.764702 env[1275]: time="2025-05-17T00:45:18.764674451Z" level=info msg="StartContainer for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" returns successfully" May 17 00:45:19.070540 kubelet[2095]: I0517 00:45:19.070471 2095 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:45:19.234762 kubelet[2095]: I0517 00:45:19.234707 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vvcdl" podStartSLOduration=2.086501364 podStartE2EDuration="11.228582871s" podCreationTimestamp="2025-05-17 00:45:08 +0000 UTC" firstStartedPulling="2025-05-17 00:45:08.673470676 +0000 UTC m=+5.185976150" lastFinishedPulling="2025-05-17 00:45:17.815552182 +0000 UTC m=+14.328057657" observedRunningTime="2025-05-17 00:45:18.82868815 +0000 UTC m=+15.341193632" watchObservedRunningTime="2025-05-17 00:45:19.228582871 +0000 UTC m=+15.741088348" May 17 00:45:19.296769 systemd[1]: Created slice kubepods-burstable-pod8c605821_5008_48ea_b1c9_cf52c60cad80.slice. May 17 00:45:19.298183 systemd[1]: Created slice kubepods-burstable-pod48e99246_b8f3_4255_a870_dd9abe85d762.slice. 
May 17 00:45:19.408148 kubelet[2095]: I0517 00:45:19.408061 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c605821-5008-48ea-b1c9-cf52c60cad80-config-volume\") pod \"coredns-668d6bf9bc-nlft4\" (UID: \"8c605821-5008-48ea-b1c9-cf52c60cad80\") " pod="kube-system/coredns-668d6bf9bc-nlft4" May 17 00:45:19.408148 kubelet[2095]: I0517 00:45:19.408085 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529mv\" (UniqueName: \"kubernetes.io/projected/8c605821-5008-48ea-b1c9-cf52c60cad80-kube-api-access-529mv\") pod \"coredns-668d6bf9bc-nlft4\" (UID: \"8c605821-5008-48ea-b1c9-cf52c60cad80\") " pod="kube-system/coredns-668d6bf9bc-nlft4" May 17 00:45:19.408148 kubelet[2095]: I0517 00:45:19.408100 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqws\" (UniqueName: \"kubernetes.io/projected/48e99246-b8f3-4255-a870-dd9abe85d762-kube-api-access-zlqws\") pod \"coredns-668d6bf9bc-xdbl2\" (UID: \"48e99246-b8f3-4255-a870-dd9abe85d762\") " pod="kube-system/coredns-668d6bf9bc-xdbl2" May 17 00:45:19.408148 kubelet[2095]: I0517 00:45:19.408114 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48e99246-b8f3-4255-a870-dd9abe85d762-config-volume\") pod \"coredns-668d6bf9bc-xdbl2\" (UID: \"48e99246-b8f3-4255-a870-dd9abe85d762\") " pod="kube-system/coredns-668d6bf9bc-xdbl2" May 17 00:45:19.622738 env[1275]: time="2025-05-17T00:45:19.622708404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nlft4,Uid:8c605821-5008-48ea-b1c9-cf52c60cad80,Namespace:kube-system,Attempt:0,}" May 17 00:45:19.623098 env[1275]: time="2025-05-17T00:45:19.622921640Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-xdbl2,Uid:48e99246-b8f3-4255-a870-dd9abe85d762,Namespace:kube-system,Attempt:0,}" May 17 00:45:20.346180 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 17 00:45:20.764189 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 17 00:45:22.395058 systemd-networkd[1082]: cilium_host: Link UP May 17 00:45:22.399880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:45:22.395133 systemd-networkd[1082]: cilium_net: Link UP May 17 00:45:22.395135 systemd-networkd[1082]: cilium_net: Gained carrier May 17 00:45:22.395238 systemd-networkd[1082]: cilium_host: Gained carrier May 17 00:45:22.395330 systemd-networkd[1082]: cilium_host: Gained IPv6LL May 17 00:45:22.526138 systemd-networkd[1082]: cilium_vxlan: Link UP May 17 00:45:22.526143 systemd-networkd[1082]: cilium_vxlan: Gained carrier May 17 00:45:23.226260 systemd-networkd[1082]: cilium_net: Gained IPv6LL May 17 00:45:23.866269 systemd-networkd[1082]: cilium_vxlan: Gained IPv6LL May 17 00:45:23.904171 kernel: NET: Registered PF_ALG protocol family May 17 00:45:24.869619 systemd-networkd[1082]: lxc_health: Link UP May 17 00:45:24.875930 systemd-networkd[1082]: lxc_health: Gained carrier May 17 00:45:24.876290 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:45:25.206607 systemd-networkd[1082]: lxc964bdb9f304d: Link UP May 17 00:45:25.215270 kernel: eth0: renamed from tmp6068d May 17 00:45:25.218654 systemd-networkd[1082]: lxcd2e1f9d1f42c: Link UP May 17 00:45:25.225173 kernel: eth0: renamed from tmpd507b May 17 00:45:25.232046 systemd-networkd[1082]: lxc964bdb9f304d: Gained carrier May 17 00:45:25.232191 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc964bdb9f304d: link becomes ready May 17 00:45:25.233432 systemd-networkd[1082]: lxcd2e1f9d1f42c: Gained carrier May 17 
00:45:25.234210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd2e1f9d1f42c: link becomes ready May 17 00:45:26.114362 kubelet[2095]: I0517 00:45:26.114314 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vnm2m" podStartSLOduration=12.384079647 podStartE2EDuration="19.114295477s" podCreationTimestamp="2025-05-17 00:45:07 +0000 UTC" firstStartedPulling="2025-05-17 00:45:08.17712508 +0000 UTC m=+4.689630555" lastFinishedPulling="2025-05-17 00:45:14.907340913 +0000 UTC m=+11.419846385" observedRunningTime="2025-05-17 00:45:19.691343794 +0000 UTC m=+16.203849276" watchObservedRunningTime="2025-05-17 00:45:26.114295477 +0000 UTC m=+22.626800957" May 17 00:45:26.170233 systemd-networkd[1082]: lxc_health: Gained IPv6LL May 17 00:45:26.810330 systemd-networkd[1082]: lxcd2e1f9d1f42c: Gained IPv6LL May 17 00:45:26.874248 systemd-networkd[1082]: lxc964bdb9f304d: Gained IPv6LL May 17 00:45:27.871534 env[1275]: time="2025-05-17T00:45:27.871494842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:45:27.871851 env[1275]: time="2025-05-17T00:45:27.871835470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:45:27.871931 env[1275]: time="2025-05-17T00:45:27.871912174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:45:27.872090 env[1275]: time="2025-05-17T00:45:27.872072739Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d507b74d9fcd084b1ccba0db1387821587397296390bbf96084fa0fe35419175 pid=3283 runtime=io.containerd.runc.v2 May 17 00:45:27.891177 systemd[1]: Started cri-containerd-d507b74d9fcd084b1ccba0db1387821587397296390bbf96084fa0fe35419175.scope. 
May 17 00:45:27.893467 systemd[1]: run-containerd-runc-k8s.io-d507b74d9fcd084b1ccba0db1387821587397296390bbf96084fa0fe35419175-runc.nb7Rfj.mount: Deactivated successfully. May 17 00:45:27.898447 env[1275]: time="2025-05-17T00:45:27.898405254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:45:27.898540 env[1275]: time="2025-05-17T00:45:27.898464393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:45:27.898540 env[1275]: time="2025-05-17T00:45:27.898482702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:45:27.898616 env[1275]: time="2025-05-17T00:45:27.898592912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6068d7dea07690b855f6b997fe3202d5f778f4bd899c2f28ea11c194b154fa48 pid=3312 runtime=io.containerd.runc.v2 May 17 00:45:27.916802 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:45:27.925211 systemd[1]: Started cri-containerd-6068d7dea07690b855f6b997fe3202d5f778f4bd899c2f28ea11c194b154fa48.scope. 
May 17 00:45:27.937327 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:45:27.960590 env[1275]: time="2025-05-17T00:45:27.960565590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nlft4,Uid:8c605821-5008-48ea-b1c9-cf52c60cad80,Namespace:kube-system,Attempt:0,} returns sandbox id \"d507b74d9fcd084b1ccba0db1387821587397296390bbf96084fa0fe35419175\"" May 17 00:45:27.962908 env[1275]: time="2025-05-17T00:45:27.962892310Z" level=info msg="CreateContainer within sandbox \"d507b74d9fcd084b1ccba0db1387821587397296390bbf96084fa0fe35419175\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:45:27.966927 env[1275]: time="2025-05-17T00:45:27.966896748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xdbl2,Uid:48e99246-b8f3-4255-a870-dd9abe85d762,Namespace:kube-system,Attempt:0,} returns sandbox id \"6068d7dea07690b855f6b997fe3202d5f778f4bd899c2f28ea11c194b154fa48\"" May 17 00:45:27.969305 env[1275]: time="2025-05-17T00:45:27.969279310Z" level=info msg="CreateContainer within sandbox \"6068d7dea07690b855f6b997fe3202d5f778f4bd899c2f28ea11c194b154fa48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:45:27.993354 env[1275]: time="2025-05-17T00:45:27.993319509Z" level=info msg="CreateContainer within sandbox \"6068d7dea07690b855f6b997fe3202d5f778f4bd899c2f28ea11c194b154fa48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef4d2ad14e2c4395d5382a769d0fcee6d5568e63434a1be2965fa33f559b5901\"" May 17 00:45:27.994058 env[1275]: time="2025-05-17T00:45:27.994043890Z" level=info msg="StartContainer for \"ef4d2ad14e2c4395d5382a769d0fcee6d5568e63434a1be2965fa33f559b5901\"" May 17 00:45:27.998251 env[1275]: time="2025-05-17T00:45:27.998219327Z" level=info msg="CreateContainer within sandbox \"d507b74d9fcd084b1ccba0db1387821587397296390bbf96084fa0fe35419175\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7326786ea354407de8ca1be1b78fa341ef84ceaa9ea0a86865b5de30d2123669\"" May 17 00:45:27.998814 env[1275]: time="2025-05-17T00:45:27.998754277Z" level=info msg="StartContainer for \"7326786ea354407de8ca1be1b78fa341ef84ceaa9ea0a86865b5de30d2123669\"" May 17 00:45:28.015526 systemd[1]: Started cri-containerd-ef4d2ad14e2c4395d5382a769d0fcee6d5568e63434a1be2965fa33f559b5901.scope. May 17 00:45:28.025736 systemd[1]: Started cri-containerd-7326786ea354407de8ca1be1b78fa341ef84ceaa9ea0a86865b5de30d2123669.scope. May 17 00:45:28.066349 env[1275]: time="2025-05-17T00:45:28.066309178Z" level=info msg="StartContainer for \"7326786ea354407de8ca1be1b78fa341ef84ceaa9ea0a86865b5de30d2123669\" returns successfully" May 17 00:45:28.067927 env[1275]: time="2025-05-17T00:45:28.067895281Z" level=info msg="StartContainer for \"ef4d2ad14e2c4395d5382a769d0fcee6d5568e63434a1be2965fa33f559b5901\" returns successfully" May 17 00:45:28.720339 kubelet[2095]: I0517 00:45:28.720302 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nlft4" podStartSLOduration=20.720280114 podStartE2EDuration="20.720280114s" podCreationTimestamp="2025-05-17 00:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:45:28.711530018 +0000 UTC m=+25.224035500" watchObservedRunningTime="2025-05-17 00:45:28.720280114 +0000 UTC m=+25.232785591" May 17 00:45:28.720874 kubelet[2095]: I0517 00:45:28.720849 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xdbl2" podStartSLOduration=20.720838108 podStartE2EDuration="20.720838108s" podCreationTimestamp="2025-05-17 00:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:45:28.719716481 +0000 UTC 
m=+25.232221965" watchObservedRunningTime="2025-05-17 00:45:28.720838108 +0000 UTC m=+25.233343588" May 17 00:45:28.876437 systemd[1]: run-containerd-runc-k8s.io-6068d7dea07690b855f6b997fe3202d5f778f4bd899c2f28ea11c194b154fa48-runc.2wTmox.mount: Deactivated successfully. May 17 00:46:12.476371 systemd[1]: Started sshd@5-139.178.70.99:22-147.75.109.163:54808.service. May 17 00:46:12.605055 sshd[3452]: Accepted publickey for core from 147.75.109.163 port 54808 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:12.613451 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:12.619351 systemd-logind[1263]: New session 8 of user core. May 17 00:46:12.620148 systemd[1]: Started session-8.scope. May 17 00:46:12.878813 sshd[3452]: pam_unix(sshd:session): session closed for user core May 17 00:46:12.880628 systemd[1]: sshd@5-139.178.70.99:22-147.75.109.163:54808.service: Deactivated successfully. May 17 00:46:12.881084 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:46:12.881474 systemd-logind[1263]: Session 8 logged out. Waiting for processes to exit. May 17 00:46:12.882037 systemd-logind[1263]: Removed session 8. May 17 00:46:17.881773 systemd[1]: Started sshd@6-139.178.70.99:22-147.75.109.163:54822.service. May 17 00:46:18.027697 sshd[3464]: Accepted publickey for core from 147.75.109.163 port 54822 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:18.028557 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:18.031715 systemd[1]: Started session-9.scope. May 17 00:46:18.031923 systemd-logind[1263]: New session 9 of user core. May 17 00:46:18.225606 sshd[3464]: pam_unix(sshd:session): session closed for user core May 17 00:46:18.227387 systemd[1]: sshd@6-139.178.70.99:22-147.75.109.163:54822.service: Deactivated successfully. May 17 00:46:18.227815 systemd[1]: session-9.scope: Deactivated successfully. 
May 17 00:46:18.228035 systemd-logind[1263]: Session 9 logged out. Waiting for processes to exit. May 17 00:46:18.228453 systemd-logind[1263]: Removed session 9. May 17 00:46:23.228879 systemd[1]: Started sshd@7-139.178.70.99:22-147.75.109.163:51126.service. May 17 00:46:23.265181 sshd[3477]: Accepted publickey for core from 147.75.109.163 port 51126 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:23.266059 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:23.268641 systemd-logind[1263]: New session 10 of user core. May 17 00:46:23.269124 systemd[1]: Started session-10.scope. May 17 00:46:23.354295 sshd[3477]: pam_unix(sshd:session): session closed for user core May 17 00:46:23.355853 systemd-logind[1263]: Session 10 logged out. Waiting for processes to exit. May 17 00:46:23.356695 systemd[1]: sshd@7-139.178.70.99:22-147.75.109.163:51126.service: Deactivated successfully. May 17 00:46:23.357195 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:46:23.358084 systemd-logind[1263]: Removed session 10. May 17 00:46:28.358007 systemd[1]: Started sshd@8-139.178.70.99:22-147.75.109.163:36766.service. May 17 00:46:28.402503 sshd[3492]: Accepted publickey for core from 147.75.109.163 port 36766 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:28.403413 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:28.406512 systemd[1]: Started session-11.scope. May 17 00:46:28.406874 systemd-logind[1263]: New session 11 of user core. May 17 00:46:28.530099 systemd[1]: Started sshd@9-139.178.70.99:22-147.75.109.163:36782.service. May 17 00:46:28.530559 sshd[3492]: pam_unix(sshd:session): session closed for user core May 17 00:46:28.533761 systemd-logind[1263]: Session 11 logged out. Waiting for processes to exit. May 17 00:46:28.534893 systemd[1]: sshd@8-139.178.70.99:22-147.75.109.163:36766.service: Deactivated successfully. 
May 17 00:46:28.535306 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:46:28.536268 systemd-logind[1263]: Removed session 11. May 17 00:46:28.567836 sshd[3503]: Accepted publickey for core from 147.75.109.163 port 36782 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:28.568869 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:28.571835 systemd[1]: Started session-12.scope. May 17 00:46:28.572268 systemd-logind[1263]: New session 12 of user core. May 17 00:46:28.706344 sshd[3503]: pam_unix(sshd:session): session closed for user core May 17 00:46:28.708730 systemd[1]: Started sshd@10-139.178.70.99:22-147.75.109.163:36794.service. May 17 00:46:28.712396 systemd[1]: sshd@9-139.178.70.99:22-147.75.109.163:36782.service: Deactivated successfully. May 17 00:46:28.713181 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:46:28.714271 systemd-logind[1263]: Session 12 logged out. Waiting for processes to exit. May 17 00:46:28.714934 systemd-logind[1263]: Removed session 12. May 17 00:46:28.764063 sshd[3513]: Accepted publickey for core from 147.75.109.163 port 36794 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:28.764952 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:28.770958 systemd[1]: Started session-13.scope. May 17 00:46:28.771267 systemd-logind[1263]: New session 13 of user core. May 17 00:46:28.871301 sshd[3513]: pam_unix(sshd:session): session closed for user core May 17 00:46:28.873009 systemd[1]: sshd@10-139.178.70.99:22-147.75.109.163:36794.service: Deactivated successfully. May 17 00:46:28.873454 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:46:28.873748 systemd-logind[1263]: Session 13 logged out. Waiting for processes to exit. May 17 00:46:28.874165 systemd-logind[1263]: Removed session 13. 
May 17 00:46:33.875079 systemd[1]: Started sshd@11-139.178.70.99:22-147.75.109.163:36796.service. May 17 00:46:33.910614 sshd[3525]: Accepted publickey for core from 147.75.109.163 port 36796 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:33.911664 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:33.914688 systemd[1]: Started session-14.scope. May 17 00:46:33.914989 systemd-logind[1263]: New session 14 of user core. May 17 00:46:33.996926 sshd[3525]: pam_unix(sshd:session): session closed for user core May 17 00:46:33.998383 systemd[1]: sshd@11-139.178.70.99:22-147.75.109.163:36796.service: Deactivated successfully. May 17 00:46:33.998827 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:46:33.999263 systemd-logind[1263]: Session 14 logged out. Waiting for processes to exit. May 17 00:46:33.999721 systemd-logind[1263]: Removed session 14. May 17 00:46:39.000939 systemd[1]: Started sshd@12-139.178.70.99:22-147.75.109.163:35648.service. May 17 00:46:39.037696 sshd[3537]: Accepted publickey for core from 147.75.109.163 port 35648 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:39.038954 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:39.042664 systemd[1]: Started session-15.scope. May 17 00:46:39.043047 systemd-logind[1263]: New session 15 of user core. May 17 00:46:39.138858 sshd[3537]: pam_unix(sshd:session): session closed for user core May 17 00:46:39.141233 systemd[1]: Started sshd@13-139.178.70.99:22-147.75.109.163:35660.service. May 17 00:46:39.143395 systemd[1]: sshd@12-139.178.70.99:22-147.75.109.163:35648.service: Deactivated successfully. May 17 00:46:39.143770 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:46:39.144427 systemd-logind[1263]: Session 15 logged out. Waiting for processes to exit. May 17 00:46:39.144874 systemd-logind[1263]: Removed session 15. 
May 17 00:46:39.178431 sshd[3550]: Accepted publickey for core from 147.75.109.163 port 35660 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:39.179458 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:39.183205 systemd[1]: Started session-16.scope. May 17 00:46:39.184216 systemd-logind[1263]: New session 16 of user core. May 17 00:46:39.653989 sshd[3550]: pam_unix(sshd:session): session closed for user core May 17 00:46:39.656558 systemd[1]: Started sshd@14-139.178.70.99:22-147.75.109.163:35662.service. May 17 00:46:39.660330 systemd[1]: sshd@13-139.178.70.99:22-147.75.109.163:35660.service: Deactivated successfully. May 17 00:46:39.660751 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:46:39.661833 systemd-logind[1263]: Session 16 logged out. Waiting for processes to exit. May 17 00:46:39.662775 systemd-logind[1263]: Removed session 16. May 17 00:46:39.698206 sshd[3560]: Accepted publickey for core from 147.75.109.163 port 35662 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:39.699400 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:39.703092 systemd[1]: Started session-17.scope. May 17 00:46:39.703394 systemd-logind[1263]: New session 17 of user core. May 17 00:46:40.541893 systemd[1]: Started sshd@15-139.178.70.99:22-147.75.109.163:35674.service. May 17 00:46:40.542736 sshd[3560]: pam_unix(sshd:session): session closed for user core May 17 00:46:40.550828 systemd[1]: sshd@14-139.178.70.99:22-147.75.109.163:35662.service: Deactivated successfully. May 17 00:46:40.551488 systemd-logind[1263]: Session 17 logged out. Waiting for processes to exit. May 17 00:46:40.551792 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:46:40.552348 systemd-logind[1263]: Removed session 17. 
May 17 00:46:40.591406 sshd[3575]: Accepted publickey for core from 147.75.109.163 port 35674 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:40.592298 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:40.595540 systemd[1]: Started session-18.scope. May 17 00:46:40.595900 systemd-logind[1263]: New session 18 of user core. May 17 00:46:40.803045 sshd[3575]: pam_unix(sshd:session): session closed for user core May 17 00:46:40.804703 systemd[1]: Started sshd@16-139.178.70.99:22-147.75.109.163:35684.service. May 17 00:46:40.809199 systemd[1]: sshd@15-139.178.70.99:22-147.75.109.163:35674.service: Deactivated successfully. May 17 00:46:40.809628 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:46:40.810560 systemd-logind[1263]: Session 18 logged out. Waiting for processes to exit. May 17 00:46:40.811637 systemd-logind[1263]: Removed session 18. May 17 00:46:40.848994 sshd[3586]: Accepted publickey for core from 147.75.109.163 port 35684 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:40.850138 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:40.854213 systemd-logind[1263]: New session 19 of user core. May 17 00:46:40.854336 systemd[1]: Started session-19.scope. May 17 00:46:40.950210 sshd[3586]: pam_unix(sshd:session): session closed for user core May 17 00:46:40.951672 systemd-logind[1263]: Session 19 logged out. Waiting for processes to exit. May 17 00:46:40.951834 systemd[1]: sshd@16-139.178.70.99:22-147.75.109.163:35684.service: Deactivated successfully. May 17 00:46:40.952255 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:46:40.952764 systemd-logind[1263]: Removed session 19. May 17 00:46:45.953794 systemd[1]: Started sshd@17-139.178.70.99:22-147.75.109.163:35688.service. 
May 17 00:46:45.990828 sshd[3601]: Accepted publickey for core from 147.75.109.163 port 35688 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:45.992501 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:45.996207 systemd-logind[1263]: New session 20 of user core. May 17 00:46:45.996219 systemd[1]: Started session-20.scope. May 17 00:46:46.106235 sshd[3601]: pam_unix(sshd:session): session closed for user core May 17 00:46:46.108594 systemd[1]: sshd@17-139.178.70.99:22-147.75.109.163:35688.service: Deactivated successfully. May 17 00:46:46.109030 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:46:46.109712 systemd-logind[1263]: Session 20 logged out. Waiting for processes to exit. May 17 00:46:46.110172 systemd-logind[1263]: Removed session 20. May 17 00:46:51.109824 systemd[1]: Started sshd@18-139.178.70.99:22-147.75.109.163:39204.service. May 17 00:46:51.149384 sshd[3613]: Accepted publickey for core from 147.75.109.163 port 39204 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:51.150809 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:51.154333 systemd-logind[1263]: New session 21 of user core. May 17 00:46:51.154975 systemd[1]: Started session-21.scope. May 17 00:46:51.242349 sshd[3613]: pam_unix(sshd:session): session closed for user core May 17 00:46:51.243851 systemd-logind[1263]: Session 21 logged out. Waiting for processes to exit. May 17 00:46:51.244006 systemd[1]: sshd@18-139.178.70.99:22-147.75.109.163:39204.service: Deactivated successfully. May 17 00:46:51.244419 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:46:51.244931 systemd-logind[1263]: Removed session 21. May 17 00:46:56.245946 systemd[1]: Started sshd@19-139.178.70.99:22-147.75.109.163:39216.service. 
May 17 00:46:56.282527 sshd[3625]: Accepted publickey for core from 147.75.109.163 port 39216 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:46:56.283332 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:56.286031 systemd-logind[1263]: New session 22 of user core. May 17 00:46:56.286644 systemd[1]: Started session-22.scope. May 17 00:46:56.393478 sshd[3625]: pam_unix(sshd:session): session closed for user core May 17 00:46:56.395410 systemd-logind[1263]: Session 22 logged out. Waiting for processes to exit. May 17 00:46:56.395520 systemd[1]: sshd@19-139.178.70.99:22-147.75.109.163:39216.service: Deactivated successfully. May 17 00:46:56.396099 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:46:56.396686 systemd-logind[1263]: Removed session 22. May 17 00:47:01.397708 systemd[1]: Started sshd@20-139.178.70.99:22-147.75.109.163:56940.service. May 17 00:47:01.439097 sshd[3637]: Accepted publickey for core from 147.75.109.163 port 56940 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:47:01.440023 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:47:01.443328 systemd[1]: Started session-23.scope. May 17 00:47:01.443976 systemd-logind[1263]: New session 23 of user core. May 17 00:47:01.580551 sshd[3637]: pam_unix(sshd:session): session closed for user core May 17 00:47:01.583083 systemd[1]: Started sshd@21-139.178.70.99:22-147.75.109.163:56956.service. May 17 00:47:01.585715 systemd[1]: sshd@20-139.178.70.99:22-147.75.109.163:56940.service: Deactivated successfully. May 17 00:47:01.586215 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:47:01.586728 systemd-logind[1263]: Session 23 logged out. Waiting for processes to exit. May 17 00:47:01.587253 systemd-logind[1263]: Removed session 23. 
May 17 00:47:01.668268 sshd[3648]: Accepted publickey for core from 147.75.109.163 port 56956 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:47:01.669213 sshd[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:47:01.672237 systemd[1]: Started session-24.scope. May 17 00:47:01.672611 systemd-logind[1263]: New session 24 of user core. May 17 00:47:03.966245 env[1275]: time="2025-05-17T00:47:03.966206652Z" level=info msg="StopContainer for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" with timeout 30 (s)" May 17 00:47:03.966893 env[1275]: time="2025-05-17T00:47:03.966869213Z" level=info msg="Stop container \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" with signal terminated" May 17 00:47:03.975126 systemd[1]: run-containerd-runc-k8s.io-69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9-runc.NHyEEu.mount: Deactivated successfully. May 17 00:47:04.003167 systemd[1]: cri-containerd-9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea.scope: Deactivated successfully. May 17 00:47:04.014744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea-rootfs.mount: Deactivated successfully. 
May 17 00:47:04.030264 env[1275]: time="2025-05-17T00:47:04.030218374Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:47:04.048333 env[1275]: time="2025-05-17T00:47:04.048309275Z" level=info msg="StopContainer for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" with timeout 2 (s)" May 17 00:47:04.051904 env[1275]: time="2025-05-17T00:47:04.048668765Z" level=info msg="Stop container \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" with signal terminated" May 17 00:47:04.061463 systemd-networkd[1082]: lxc_health: Link DOWN May 17 00:47:04.061468 systemd-networkd[1082]: lxc_health: Lost carrier May 17 00:47:04.085192 env[1275]: time="2025-05-17T00:47:04.085157837Z" level=info msg="shim disconnected" id=9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea May 17 00:47:04.085326 env[1275]: time="2025-05-17T00:47:04.085313815Z" level=warning msg="cleaning up after shim disconnected" id=9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea namespace=k8s.io May 17 00:47:04.085379 env[1275]: time="2025-05-17T00:47:04.085369408Z" level=info msg="cleaning up dead shim" May 17 00:47:04.091246 env[1275]: time="2025-05-17T00:47:04.091223204Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3706 runtime=io.containerd.runc.v2\n" May 17 00:47:04.095452 systemd[1]: cri-containerd-69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9.scope: Deactivated successfully. May 17 00:47:04.095610 systemd[1]: cri-containerd-69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9.scope: Consumed 4.612s CPU time. 
May 17 00:47:04.098839 env[1275]: time="2025-05-17T00:47:04.098818412Z" level=info msg="StopContainer for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" returns successfully" May 17 00:47:04.107058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9-rootfs.mount: Deactivated successfully. May 17 00:47:04.116962 env[1275]: time="2025-05-17T00:47:04.116939188Z" level=info msg="StopPodSandbox for \"0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a\"" May 17 00:47:04.117245 env[1275]: time="2025-05-17T00:47:04.117228915Z" level=info msg="Container to stop \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:04.121366 systemd[1]: cri-containerd-0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a.scope: Deactivated successfully. May 17 00:47:04.263579 env[1275]: time="2025-05-17T00:47:04.262654534Z" level=info msg="shim disconnected" id=69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9 May 17 00:47:04.263579 env[1275]: time="2025-05-17T00:47:04.262694392Z" level=warning msg="cleaning up after shim disconnected" id=69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9 namespace=k8s.io May 17 00:47:04.263579 env[1275]: time="2025-05-17T00:47:04.262702705Z" level=info msg="cleaning up dead shim" May 17 00:47:04.272022 env[1275]: time="2025-05-17T00:47:04.263650548Z" level=info msg="shim disconnected" id=0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a May 17 00:47:04.272022 env[1275]: time="2025-05-17T00:47:04.263672074Z" level=warning msg="cleaning up after shim disconnected" id=0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a namespace=k8s.io May 17 00:47:04.272022 env[1275]: time="2025-05-17T00:47:04.263679208Z" level=info msg="cleaning up dead shim" May 17 00:47:04.272022 env[1275]: 
time="2025-05-17T00:47:04.269979984Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3756 runtime=io.containerd.runc.v2\n" May 17 00:47:04.272022 env[1275]: time="2025-05-17T00:47:04.270106062Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3751 runtime=io.containerd.runc.v2\n" May 17 00:47:04.278424 env[1275]: time="2025-05-17T00:47:04.278403795Z" level=info msg="TearDown network for sandbox \"0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a\" successfully" May 17 00:47:04.278504 env[1275]: time="2025-05-17T00:47:04.278487193Z" level=info msg="StopPodSandbox for \"0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a\" returns successfully" May 17 00:47:04.301963 env[1275]: time="2025-05-17T00:47:04.301853299Z" level=info msg="StopContainer for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" returns successfully" May 17 00:47:04.302334 env[1275]: time="2025-05-17T00:47:04.302295172Z" level=info msg="StopPodSandbox for \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\"" May 17 00:47:04.302407 env[1275]: time="2025-05-17T00:47:04.302348098Z" level=info msg="Container to stop \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:04.302407 env[1275]: time="2025-05-17T00:47:04.302362917Z" level=info msg="Container to stop \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:04.302407 env[1275]: time="2025-05-17T00:47:04.302371524Z" level=info msg="Container to stop \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:04.302407 env[1275]: 
time="2025-05-17T00:47:04.302379061Z" level=info msg="Container to stop \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:04.302407 env[1275]: time="2025-05-17T00:47:04.302387253Z" level=info msg="Container to stop \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:04.310947 systemd[1]: cri-containerd-b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d.scope: Deactivated successfully. May 17 00:47:04.379293 env[1275]: time="2025-05-17T00:47:04.379254890Z" level=info msg="shim disconnected" id=b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d May 17 00:47:04.379293 env[1275]: time="2025-05-17T00:47:04.379289469Z" level=warning msg="cleaning up after shim disconnected" id=b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d namespace=k8s.io May 17 00:47:04.379293 env[1275]: time="2025-05-17T00:47:04.379297645Z" level=info msg="cleaning up dead shim" May 17 00:47:04.385381 env[1275]: time="2025-05-17T00:47:04.385345674Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" May 17 00:47:04.385850 env[1275]: time="2025-05-17T00:47:04.385819502Z" level=info msg="TearDown network for sandbox \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" successfully" May 17 00:47:04.385850 env[1275]: time="2025-05-17T00:47:04.385840785Z" level=info msg="StopPodSandbox for \"b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d\" returns successfully" May 17 00:47:04.443878 kubelet[2095]: I0517 00:47:04.443846 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5b6b807d-7944-4679-963c-49de35c56b4f-cilium-config-path\") pod \"5b6b807d-7944-4679-963c-49de35c56b4f\" (UID: \"5b6b807d-7944-4679-963c-49de35c56b4f\") " May 17 00:47:04.443878 kubelet[2095]: I0517 00:47:04.443888 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7jfz\" (UniqueName: \"kubernetes.io/projected/5b6b807d-7944-4679-963c-49de35c56b4f-kube-api-access-r7jfz\") pod \"5b6b807d-7944-4679-963c-49de35c56b4f\" (UID: \"5b6b807d-7944-4679-963c-49de35c56b4f\") " May 17 00:47:04.521878 kubelet[2095]: I0517 00:47:04.518985 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b807d-7944-4679-963c-49de35c56b4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b6b807d-7944-4679-963c-49de35c56b4f" (UID: "5b6b807d-7944-4679-963c-49de35c56b4f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:47:04.526787 kubelet[2095]: I0517 00:47:04.526767 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6b807d-7944-4679-963c-49de35c56b4f-kube-api-access-r7jfz" (OuterVolumeSpecName: "kube-api-access-r7jfz") pod "5b6b807d-7944-4679-963c-49de35c56b4f" (UID: "5b6b807d-7944-4679-963c-49de35c56b4f"). InnerVolumeSpecName "kube-api-access-r7jfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:47:04.544624 kubelet[2095]: I0517 00:47:04.544596 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-kernel\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544624 kubelet[2095]: I0517 00:47:04.544622 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-hostproc\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544757 kubelet[2095]: I0517 00:47:04.544634 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cni-path\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544757 kubelet[2095]: I0517 00:47:04.544647 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-config-path\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544757 kubelet[2095]: I0517 00:47:04.544658 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-xtables-lock\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544757 kubelet[2095]: I0517 00:47:04.544666 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-run\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544757 kubelet[2095]: I0517 00:47:04.544674 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-net\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544757 kubelet[2095]: I0517 00:47:04.544684 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-hubble-tls\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544890 kubelet[2095]: I0517 00:47:04.544693 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-cgroup\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544890 kubelet[2095]: I0517 00:47:04.544705 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-lib-modules\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544890 kubelet[2095]: I0517 00:47:04.544712 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-bpf-maps\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544890 kubelet[2095]: I0517 00:47:04.544722 2095 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-etc-cni-netd\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544890 kubelet[2095]: I0517 00:47:04.544731 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83bda6c3-5a5a-46af-a2d7-028195cd0545-clustermesh-secrets\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.544890 kubelet[2095]: I0517 00:47:04.544742 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfjx4\" (UniqueName: \"kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-kube-api-access-qfjx4\") pod \"83bda6c3-5a5a-46af-a2d7-028195cd0545\" (UID: \"83bda6c3-5a5a-46af-a2d7-028195cd0545\") " May 17 00:47:04.545025 kubelet[2095]: I0517 00:47:04.544765 2095 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r7jfz\" (UniqueName: \"kubernetes.io/projected/5b6b807d-7944-4679-963c-49de35c56b4f-kube-api-access-r7jfz\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.545025 kubelet[2095]: I0517 00:47:04.544772 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b6b807d-7944-4679-963c-49de35c56b4f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.545079 kubelet[2095]: I0517 00:47:04.545064 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545144 kubelet[2095]: I0517 00:47:04.545128 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-hostproc" (OuterVolumeSpecName: "hostproc") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545205 kubelet[2095]: I0517 00:47:04.545193 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545265 kubelet[2095]: I0517 00:47:04.545255 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cni-path" (OuterVolumeSpecName: "cni-path") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545313 kubelet[2095]: I0517 00:47:04.545184 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545361 kubelet[2095]: I0517 00:47:04.545272 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545414 kubelet[2095]: I0517 00:47:04.545280 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545838 kubelet[2095]: I0517 00:47:04.545828 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545905 kubelet[2095]: I0517 00:47:04.545896 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.545963 kubelet[2095]: I0517 00:47:04.545954 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:04.548798 kubelet[2095]: I0517 00:47:04.548771 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:47:04.549139 kubelet[2095]: I0517 00:47:04.549120 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:47:04.549235 kubelet[2095]: I0517 00:47:04.549221 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-kube-api-access-qfjx4" (OuterVolumeSpecName: "kube-api-access-qfjx4") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "kube-api-access-qfjx4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:47:04.549520 kubelet[2095]: I0517 00:47:04.549505 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bda6c3-5a5a-46af-a2d7-028195cd0545-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83bda6c3-5a5a-46af-a2d7-028195cd0545" (UID: "83bda6c3-5a5a-46af-a2d7-028195cd0545"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:47:04.645089 kubelet[2095]: I0517 00:47:04.645054 2095 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645089 kubelet[2095]: I0517 00:47:04.645083 2095 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645089 kubelet[2095]: I0517 00:47:04.645090 2095 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-hostproc\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645089 kubelet[2095]: I0517 00:47:04.645095 2095 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cni-path\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645101 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645105 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-run\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645110 2095 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645114 2095 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645128 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645136 2095 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-lib-modules\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645141 2095 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.645285 kubelet[2095]: I0517 00:47:04.645170 2095 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfjx4\" (UniqueName: \"kubernetes.io/projected/83bda6c3-5a5a-46af-a2d7-028195cd0545-kube-api-access-qfjx4\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.650563 kubelet[2095]: I0517 00:47:04.645176 2095 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83bda6c3-5a5a-46af-a2d7-028195cd0545-etc-cni-netd\") on node \"localhost\" DevicePath 
\"\"" May 17 00:47:04.650563 kubelet[2095]: I0517 00:47:04.645181 2095 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83bda6c3-5a5a-46af-a2d7-028195cd0545-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 17 00:47:04.852926 systemd[1]: Removed slice kubepods-burstable-pod83bda6c3_5a5a_46af_a2d7_028195cd0545.slice. May 17 00:47:04.852984 systemd[1]: kubepods-burstable-pod83bda6c3_5a5a_46af_a2d7_028195cd0545.slice: Consumed 4.676s CPU time. May 17 00:47:04.876052 kubelet[2095]: I0517 00:47:04.876040 2095 scope.go:117] "RemoveContainer" containerID="69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9" May 17 00:47:04.878588 env[1275]: time="2025-05-17T00:47:04.878563221Z" level=info msg="RemoveContainer for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\"" May 17 00:47:04.879310 systemd[1]: Removed slice kubepods-besteffort-pod5b6b807d_7944_4679_963c_49de35c56b4f.slice. May 17 00:47:04.903121 env[1275]: time="2025-05-17T00:47:04.903093970Z" level=info msg="RemoveContainer for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" returns successfully" May 17 00:47:04.903296 kubelet[2095]: I0517 00:47:04.903284 2095 scope.go:117] "RemoveContainer" containerID="d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf" May 17 00:47:04.910908 env[1275]: time="2025-05-17T00:47:04.910720239Z" level=info msg="RemoveContainer for \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\"" May 17 00:47:04.924753 env[1275]: time="2025-05-17T00:47:04.924698263Z" level=info msg="RemoveContainer for \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\" returns successfully" May 17 00:47:04.924850 kubelet[2095]: I0517 00:47:04.924814 2095 scope.go:117] "RemoveContainer" containerID="8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c" May 17 00:47:04.925428 env[1275]: time="2025-05-17T00:47:04.925414472Z" 
level=info msg="RemoveContainer for \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\"" May 17 00:47:04.938114 env[1275]: time="2025-05-17T00:47:04.938089596Z" level=info msg="RemoveContainer for \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\" returns successfully" May 17 00:47:04.944045 env[1275]: time="2025-05-17T00:47:04.938931410Z" level=info msg="RemoveContainer for \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\"" May 17 00:47:04.944085 kubelet[2095]: I0517 00:47:04.938371 2095 scope.go:117] "RemoveContainer" containerID="a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6" May 17 00:47:04.945994 env[1275]: time="2025-05-17T00:47:04.945969812Z" level=info msg="RemoveContainer for \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\" returns successfully" May 17 00:47:04.946242 kubelet[2095]: I0517 00:47:04.946221 2095 scope.go:117] "RemoveContainer" containerID="46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc" May 17 00:47:04.946778 env[1275]: time="2025-05-17T00:47:04.946757402Z" level=info msg="RemoveContainer for \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\"" May 17 00:47:04.949545 env[1275]: time="2025-05-17T00:47:04.949523327Z" level=info msg="RemoveContainer for \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\" returns successfully" May 17 00:47:04.949611 kubelet[2095]: I0517 00:47:04.949598 2095 scope.go:117] "RemoveContainer" containerID="69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9" May 17 00:47:04.949789 env[1275]: time="2025-05-17T00:47:04.949737627Z" level=error msg="ContainerStatus for \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\": not found" May 17 00:47:04.949923 kubelet[2095]: E0517 00:47:04.949906 
2095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\": not found" containerID="69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9" May 17 00:47:04.949980 kubelet[2095]: I0517 00:47:04.949929 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9"} err="failed to get container status \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"69378c9783804df1850753ff601c01ddeb3943026a718ef9b3092d47239532f9\": not found" May 17 00:47:04.950019 kubelet[2095]: I0517 00:47:04.949981 2095 scope.go:117] "RemoveContainer" containerID="d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf" May 17 00:47:04.950099 env[1275]: time="2025-05-17T00:47:04.950066951Z" level=error msg="ContainerStatus for \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\": not found" May 17 00:47:04.950168 kubelet[2095]: E0517 00:47:04.950145 2095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\": not found" containerID="d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf" May 17 00:47:04.950204 kubelet[2095]: I0517 00:47:04.950164 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf"} err="failed to get container status 
\"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9ef219199b1d71bf6fe4e898b1adf198cd6ef35edeff9f3085d452b8c2862cf\": not found" May 17 00:47:04.950204 kubelet[2095]: I0517 00:47:04.950183 2095 scope.go:117] "RemoveContainer" containerID="8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c" May 17 00:47:04.950344 env[1275]: time="2025-05-17T00:47:04.950315114Z" level=error msg="ContainerStatus for \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\": not found" May 17 00:47:04.950494 kubelet[2095]: E0517 00:47:04.950475 2095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\": not found" containerID="8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c" May 17 00:47:04.950547 kubelet[2095]: I0517 00:47:04.950492 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c"} err="failed to get container status \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8206b2b1bf8605b16a389f78869765ed0804e77c7de14fe084aa7b6c5380835c\": not found" May 17 00:47:04.950547 kubelet[2095]: I0517 00:47:04.950507 2095 scope.go:117] "RemoveContainer" containerID="a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6" May 17 00:47:04.950634 env[1275]: time="2025-05-17T00:47:04.950603420Z" level=error msg="ContainerStatus for \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\": not found" May 17 00:47:04.950742 kubelet[2095]: E0517 00:47:04.950725 2095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\": not found" containerID="a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6" May 17 00:47:04.950803 kubelet[2095]: I0517 00:47:04.950740 2095 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6"} err="failed to get container status \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5eb72902db601c249e9a2b07f1f512859335e621bd0a75e23f2946d009cbdb6\": not found" May 17 00:47:04.950803 kubelet[2095]: I0517 00:47:04.950750 2095 scope.go:117] "RemoveContainer" containerID="46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc" May 17 00:47:04.950874 env[1275]: time="2025-05-17T00:47:04.950824494Z" level=error msg="ContainerStatus for \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\": not found" May 17 00:47:04.950912 kubelet[2095]: E0517 00:47:04.950889 2095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\": not found" containerID="46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc" May 17 00:47:04.950912 kubelet[2095]: I0517 00:47:04.950899 2095 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc"} err="failed to get container status \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"46fbd8ffb718b7dba622e744e421c4cae9c1831454c4cdbe1deef46b22bc56cc\": not found" May 17 00:47:04.950912 kubelet[2095]: I0517 00:47:04.950906 2095 scope.go:117] "RemoveContainer" containerID="9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea" May 17 00:47:04.951534 env[1275]: time="2025-05-17T00:47:04.951516834Z" level=info msg="RemoveContainer for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\"" May 17 00:47:04.952801 env[1275]: time="2025-05-17T00:47:04.952786767Z" level=info msg="RemoveContainer for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" returns successfully" May 17 00:47:04.953369 env[1275]: time="2025-05-17T00:47:04.953021411Z" level=error msg="ContainerStatus for \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\": not found" May 17 00:47:04.953409 kubelet[2095]: I0517 00:47:04.952925 2095 scope.go:117] "RemoveContainer" containerID="9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea" May 17 00:47:04.953409 kubelet[2095]: E0517 00:47:04.953099 2095 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\": not found" containerID="9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea" May 17 00:47:04.953409 kubelet[2095]: I0517 00:47:04.953114 2095 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea"} err="failed to get container status \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b7474de2ec06c54ed77bc5b05096aa44a060b009e9335df641e2d7bbecb08ea\": not found" May 17 00:47:04.970795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a-rootfs.mount: Deactivated successfully. May 17 00:47:04.970864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c8f5adf4f1df10f39fd20b0f8906acfba60ec1a711ee0cfa26bc569e332276a-shm.mount: Deactivated successfully. May 17 00:47:04.970910 systemd[1]: var-lib-kubelet-pods-5b6b807d\x2d7944\x2d4679\x2d963c\x2d49de35c56b4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7jfz.mount: Deactivated successfully. May 17 00:47:04.970953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d-rootfs.mount: Deactivated successfully. May 17 00:47:04.970987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b14e9d4d1b853d3150d70f5044bc4067f4ee356b6113ac8b0640cfa81b61032d-shm.mount: Deactivated successfully. May 17 00:47:04.971025 systemd[1]: var-lib-kubelet-pods-83bda6c3\x2d5a5a\x2d46af\x2da2d7\x2d028195cd0545-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqfjx4.mount: Deactivated successfully. May 17 00:47:04.971069 systemd[1]: var-lib-kubelet-pods-83bda6c3\x2d5a5a\x2d46af\x2da2d7\x2d028195cd0545-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:47:04.971126 systemd[1]: var-lib-kubelet-pods-83bda6c3\x2d5a5a\x2d46af\x2da2d7\x2d028195cd0545-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 00:47:05.629716 kubelet[2095]: I0517 00:47:05.629683 2095 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6b807d-7944-4679-963c-49de35c56b4f" path="/var/lib/kubelet/pods/5b6b807d-7944-4679-963c-49de35c56b4f/volumes" May 17 00:47:05.645047 kubelet[2095]: I0517 00:47:05.645027 2095 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bda6c3-5a5a-46af-a2d7-028195cd0545" path="/var/lib/kubelet/pods/83bda6c3-5a5a-46af-a2d7-028195cd0545/volumes" May 17 00:47:05.922276 systemd[1]: Started sshd@22-139.178.70.99:22-147.75.109.163:56970.service. May 17 00:47:05.923120 sshd[3648]: pam_unix(sshd:session): session closed for user core May 17 00:47:05.924993 systemd[1]: sshd@21-139.178.70.99:22-147.75.109.163:56956.service: Deactivated successfully. May 17 00:47:05.925754 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:47:05.926286 systemd-logind[1263]: Session 24 logged out. Waiting for processes to exit. May 17 00:47:05.926876 systemd-logind[1263]: Removed session 24. May 17 00:47:05.986347 sshd[3814]: Accepted publickey for core from 147.75.109.163 port 56970 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:47:05.987494 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:47:05.992522 systemd-logind[1263]: New session 25 of user core. May 17 00:47:05.993013 systemd[1]: Started session-25.scope. May 17 00:47:06.345210 sshd[3814]: pam_unix(sshd:session): session closed for user core May 17 00:47:06.348788 systemd[1]: Started sshd@23-139.178.70.99:22-147.75.109.163:56984.service. May 17 00:47:06.351208 systemd[1]: sshd@22-139.178.70.99:22-147.75.109.163:56970.service: Deactivated successfully. May 17 00:47:06.351651 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:47:06.352722 systemd-logind[1263]: Session 25 logged out. Waiting for processes to exit. May 17 00:47:06.353422 systemd-logind[1263]: Removed session 25. 
May 17 00:47:06.373252 kubelet[2095]: I0517 00:47:06.373226 2095 memory_manager.go:355] "RemoveStaleState removing state" podUID="83bda6c3-5a5a-46af-a2d7-028195cd0545" containerName="cilium-agent" May 17 00:47:06.373383 kubelet[2095]: I0517 00:47:06.373371 2095 memory_manager.go:355] "RemoveStaleState removing state" podUID="5b6b807d-7944-4679-963c-49de35c56b4f" containerName="cilium-operator" May 17 00:47:06.389236 systemd[1]: Created slice kubepods-burstable-podb7aae66c_1d10_41dc_84f7_9c488211a430.slice. May 17 00:47:06.391098 sshd[3825]: Accepted publickey for core from 147.75.109.163 port 56984 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:47:06.391997 sshd[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:47:06.395548 systemd[1]: Started session-26.scope. May 17 00:47:06.396197 systemd-logind[1263]: New session 26 of user core. May 17 00:47:06.454856 kubelet[2095]: I0517 00:47:06.454832 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-cgroup\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.454996 kubelet[2095]: I0517 00:47:06.454986 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-clustermesh-secrets\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455057 kubelet[2095]: I0517 00:47:06.455047 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-run\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") 
" pod="kube-system/cilium-whr9r" May 17 00:47:06.455113 kubelet[2095]: I0517 00:47:06.455105 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-hostproc\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455181 kubelet[2095]: I0517 00:47:06.455172 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-hubble-tls\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455247 kubelet[2095]: I0517 00:47:06.455233 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d292t\" (UniqueName: \"kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-kube-api-access-d292t\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455300 kubelet[2095]: I0517 00:47:06.455291 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-xtables-lock\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455353 kubelet[2095]: I0517 00:47:06.455344 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-ipsec-secrets\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455410 kubelet[2095]: I0517 00:47:06.455395 2095 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cni-path\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455470 kubelet[2095]: I0517 00:47:06.455460 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-etc-cni-netd\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455530 kubelet[2095]: I0517 00:47:06.455521 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-lib-modules\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455596 kubelet[2095]: I0517 00:47:06.455582 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-bpf-maps\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455672 kubelet[2095]: I0517 00:47:06.455653 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-config-path\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455727 kubelet[2095]: I0517 00:47:06.455717 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-kernel\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.455801 kubelet[2095]: I0517 00:47:06.455790 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-net\") pod \"cilium-whr9r\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " pod="kube-system/cilium-whr9r" May 17 00:47:06.614407 sshd[3825]: pam_unix(sshd:session): session closed for user core May 17 00:47:06.617127 systemd[1]: Started sshd@24-139.178.70.99:22-147.75.109.163:57000.service. May 17 00:47:06.620449 systemd-logind[1263]: Session 26 logged out. Waiting for processes to exit. May 17 00:47:06.621354 systemd[1]: sshd@23-139.178.70.99:22-147.75.109.163:56984.service: Deactivated successfully. May 17 00:47:06.621799 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:47:06.622628 systemd-logind[1263]: Removed session 26. May 17 00:47:06.642097 env[1275]: time="2025-05-17T00:47:06.641756463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whr9r,Uid:b7aae66c-1d10-41dc-84f7-9c488211a430,Namespace:kube-system,Attempt:0,}" May 17 00:47:06.660748 sshd[3840]: Accepted publickey for core from 147.75.109.163 port 57000 ssh2: RSA SHA256:c2z/7wLfdEkcn4/VTlfeChibQyT7Fv7HLPVdQSmDlR8 May 17 00:47:06.662026 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:47:06.671056 env[1275]: time="2025-05-17T00:47:06.670516071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:47:06.671056 env[1275]: time="2025-05-17T00:47:06.670537324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:47:06.671056 env[1275]: time="2025-05-17T00:47:06.670544014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:47:06.671056 env[1275]: time="2025-05-17T00:47:06.670617318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911 pid=3850 runtime=io.containerd.runc.v2 May 17 00:47:06.674144 systemd[1]: Started session-27.scope. May 17 00:47:06.674608 systemd-logind[1263]: New session 27 of user core. May 17 00:47:06.691833 systemd[1]: Started cri-containerd-e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911.scope. May 17 00:47:06.709761 env[1275]: time="2025-05-17T00:47:06.709721464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whr9r,Uid:b7aae66c-1d10-41dc-84f7-9c488211a430,Namespace:kube-system,Attempt:0,} returns sandbox id \"e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911\"" May 17 00:47:06.713496 env[1275]: time="2025-05-17T00:47:06.713471740Z" level=info msg="CreateContainer within sandbox \"e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:47:06.752391 env[1275]: time="2025-05-17T00:47:06.752345299Z" level=info msg="CreateContainer within sandbox \"e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\"" May 17 00:47:06.752902 env[1275]: time="2025-05-17T00:47:06.752888508Z" level=info msg="StartContainer for \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\"" May 17 00:47:06.767996 systemd[1]: Started 
cri-containerd-33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694.scope. May 17 00:47:06.780443 systemd[1]: cri-containerd-33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694.scope: Deactivated successfully. May 17 00:47:06.794209 env[1275]: time="2025-05-17T00:47:06.794177631Z" level=info msg="shim disconnected" id=33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694 May 17 00:47:06.794382 env[1275]: time="2025-05-17T00:47:06.794370553Z" level=warning msg="cleaning up after shim disconnected" id=33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694 namespace=k8s.io May 17 00:47:06.794443 env[1275]: time="2025-05-17T00:47:06.794433886Z" level=info msg="cleaning up dead shim" May 17 00:47:06.799985 env[1275]: time="2025-05-17T00:47:06.799949090Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3922 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:47:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:47:06.800374 env[1275]: time="2025-05-17T00:47:06.800312515Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" May 17 00:47:06.802217 env[1275]: time="2025-05-17T00:47:06.800562078Z" level=error msg="Failed to pipe stderr of container \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\"" error="reading from a closed fifo" May 17 00:47:06.802517 env[1275]: time="2025-05-17T00:47:06.802177135Z" level=error msg="Failed to pipe stdout of container \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\"" error="reading from a closed fifo" May 17 00:47:06.805655 env[1275]: time="2025-05-17T00:47:06.805615413Z" level=error msg="StartContainer for 
\"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:47:06.805924 kubelet[2095]: E0517 00:47:06.805895 2095 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694" May 17 00:47:06.811255 kubelet[2095]: E0517 00:47:06.811225 2095 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 17 00:47:06.811255 kubelet[2095]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:47:06.811255 kubelet[2095]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:47:06.811255 kubelet[2095]: rm /hostbin/cilium-mount May 17 00:47:06.811424 kubelet[2095]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d292t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-whr9r_kube-system(b7aae66c-1d10-41dc-84f7-9c488211a430): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:47:06.811424 kubelet[2095]: > logger="UnhandledError" May 17 00:47:06.812614 kubelet[2095]: E0517 00:47:06.812581 2095 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-whr9r" podUID="b7aae66c-1d10-41dc-84f7-9c488211a430" May 17 00:47:06.856427 env[1275]: time="2025-05-17T00:47:06.856403888Z" level=info msg="StopPodSandbox for \"e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911\"" May 17 00:47:06.856569 env[1275]: time="2025-05-17T00:47:06.856556090Z" level=info msg="Container to stop \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:47:06.867419 systemd[1]: cri-containerd-e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911.scope: Deactivated successfully. May 17 00:47:06.891414 env[1275]: time="2025-05-17T00:47:06.891379579Z" level=info msg="shim disconnected" id=e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911 May 17 00:47:06.891606 env[1275]: time="2025-05-17T00:47:06.891594215Z" level=warning msg="cleaning up after shim disconnected" id=e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911 namespace=k8s.io May 17 00:47:06.891663 env[1275]: time="2025-05-17T00:47:06.891650585Z" level=info msg="cleaning up dead shim" May 17 00:47:06.896756 env[1275]: time="2025-05-17T00:47:06.896732792Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3955 runtime=io.containerd.runc.v2\n" May 17 00:47:06.897022 env[1275]: time="2025-05-17T00:47:06.897006360Z" level=info msg="TearDown network for sandbox \"e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911\" successfully" May 17 00:47:06.897082 env[1275]: 
time="2025-05-17T00:47:06.897068848Z" level=info msg="StopPodSandbox for \"e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911\" returns successfully" May 17 00:47:06.957182 kubelet[2095]: I0517 00:47:06.957120 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cni-path\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957182 kubelet[2095]: I0517 00:47:06.957169 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-config-path\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957182 kubelet[2095]: I0517 00:47:06.957187 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-hostproc\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957197 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-etc-cni-netd\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957210 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-ipsec-secrets\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957220 2095 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-kernel\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957231 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-net\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957242 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-cgroup\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957254 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-hubble-tls\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957263 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-lib-modules\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957275 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-clustermesh-secrets\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: 
\"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957286 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-run\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957295 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-bpf-maps\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957307 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-xtables-lock\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.957392 kubelet[2095]: I0517 00:47:06.957325 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d292t\" (UniqueName: \"kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-kube-api-access-d292t\") pod \"b7aae66c-1d10-41dc-84f7-9c488211a430\" (UID: \"b7aae66c-1d10-41dc-84f7-9c488211a430\") " May 17 00:47:06.958168 kubelet[2095]: I0517 00:47:06.957750 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cni-path" (OuterVolumeSpecName: "cni-path") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.958168 kubelet[2095]: I0517 00:47:06.957975 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.958168 kubelet[2095]: I0517 00:47:06.957996 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-hostproc" (OuterVolumeSpecName: "hostproc") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.958168 kubelet[2095]: I0517 00:47:06.958008 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.959120 kubelet[2095]: I0517 00:47:06.959107 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:47:06.960268 kubelet[2095]: I0517 00:47:06.960254 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.960685 kubelet[2095]: I0517 00:47:06.960666 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.960732 kubelet[2095]: I0517 00:47:06.960687 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.960760 kubelet[2095]: I0517 00:47:06.960732 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-kube-api-access-d292t" (OuterVolumeSpecName: "kube-api-access-d292t") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "kube-api-access-d292t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:47:06.960760 kubelet[2095]: I0517 00:47:06.960751 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.960805 kubelet[2095]: I0517 00:47:06.960760 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.960805 kubelet[2095]: I0517 00:47:06.960770 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:47:06.961315 kubelet[2095]: I0517 00:47:06.961302 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:47:06.962286 kubelet[2095]: I0517 00:47:06.962270 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:47:06.962882 kubelet[2095]: I0517 00:47:06.962862 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b7aae66c-1d10-41dc-84f7-9c488211a430" (UID: "b7aae66c-1d10-41dc-84f7-9c488211a430"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:47:07.058498 kubelet[2095]: I0517 00:47:07.058458 2095 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d292t\" (UniqueName: \"kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-kube-api-access-d292t\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058498 kubelet[2095]: I0517 00:47:07.058492 2095 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cni-path\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058498 kubelet[2095]: I0517 00:47:07.058504 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058515 2095 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-hostproc\") 
on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058525 2095 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058535 2095 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058543 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058552 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058561 2095 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058570 2095 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7aae66c-1d10-41dc-84f7-9c488211a430-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058579 2095 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-cilium-run\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 
00:47:07.058589 2095 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7aae66c-1d10-41dc-84f7-9c488211a430-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058599 2095 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-lib-modules\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058607 2095 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.058715 kubelet[2095]: I0517 00:47:07.058616 2095 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7aae66c-1d10-41dc-84f7-9c488211a430-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 17 00:47:07.559237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e684f97d6db0215ca48e1752b0118a8a5626f39b229dbd8e594f12f95d4e2911-shm.mount: Deactivated successfully. May 17 00:47:07.559327 systemd[1]: var-lib-kubelet-pods-b7aae66c\x2d1d10\x2d41dc\x2d84f7\x2d9c488211a430-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd292t.mount: Deactivated successfully. May 17 00:47:07.559378 systemd[1]: var-lib-kubelet-pods-b7aae66c\x2d1d10\x2d41dc\x2d84f7\x2d9c488211a430-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:47:07.559423 systemd[1]: var-lib-kubelet-pods-b7aae66c\x2d1d10\x2d41dc\x2d84f7\x2d9c488211a430-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:47:07.559471 systemd[1]: var-lib-kubelet-pods-b7aae66c\x2d1d10\x2d41dc\x2d84f7\x2d9c488211a430-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 00:47:07.624783 systemd[1]: Removed slice kubepods-burstable-podb7aae66c_1d10_41dc_84f7_9c488211a430.slice. May 17 00:47:07.858493 kubelet[2095]: I0517 00:47:07.857950 2095 scope.go:117] "RemoveContainer" containerID="33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694" May 17 00:47:07.860252 env[1275]: time="2025-05-17T00:47:07.860224721Z" level=info msg="RemoveContainer for \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\"" May 17 00:47:07.862166 env[1275]: time="2025-05-17T00:47:07.862131967Z" level=info msg="RemoveContainer for \"33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694\" returns successfully" May 17 00:47:07.885288 kubelet[2095]: I0517 00:47:07.885268 2095 memory_manager.go:355] "RemoveStaleState removing state" podUID="b7aae66c-1d10-41dc-84f7-9c488211a430" containerName="mount-cgroup" May 17 00:47:07.889749 systemd[1]: Created slice kubepods-burstable-pod2fc34eee_adbf_4741_b350_0d01ab2843ac.slice. May 17 00:47:07.962847 kubelet[2095]: I0517 00:47:07.962806 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fc34eee-adbf-4741-b350-0d01ab2843ac-cilium-ipsec-secrets\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.962847 kubelet[2095]: I0517 00:47:07.962843 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-xtables-lock\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962860 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2m6\" (UniqueName: 
\"kubernetes.io/projected/2fc34eee-adbf-4741-b350-0d01ab2843ac-kube-api-access-xw2m6\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962870 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-cilium-cgroup\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962878 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-cni-path\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962888 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-cilium-run\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962896 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-bpf-maps\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962905 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-etc-cni-netd\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " 
pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962913 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fc34eee-adbf-4741-b350-0d01ab2843ac-clustermesh-secrets\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962924 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fc34eee-adbf-4741-b350-0d01ab2843ac-cilium-config-path\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962933 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-host-proc-sys-kernel\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962942 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-hostproc\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962952 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-lib-modules\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962961 2095 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fc34eee-adbf-4741-b350-0d01ab2843ac-host-proc-sys-net\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:07.963039 kubelet[2095]: I0517 00:47:07.962971 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fc34eee-adbf-4741-b350-0d01ab2843ac-hubble-tls\") pod \"cilium-z5s5q\" (UID: \"2fc34eee-adbf-4741-b350-0d01ab2843ac\") " pod="kube-system/cilium-z5s5q" May 17 00:47:08.192238 env[1275]: time="2025-05-17T00:47:08.192200804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5s5q,Uid:2fc34eee-adbf-4741-b350-0d01ab2843ac,Namespace:kube-system,Attempt:0,}" May 17 00:47:08.287796 env[1275]: time="2025-05-17T00:47:08.287682537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:47:08.287796 env[1275]: time="2025-05-17T00:47:08.287721100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:47:08.287796 env[1275]: time="2025-05-17T00:47:08.287728352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:47:08.288004 env[1275]: time="2025-05-17T00:47:08.287983827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0 pid=3984 runtime=io.containerd.runc.v2 May 17 00:47:08.296611 systemd[1]: Started cri-containerd-cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0.scope. 
May 17 00:47:08.324467 env[1275]: time="2025-05-17T00:47:08.324432900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5s5q,Uid:2fc34eee-adbf-4741-b350-0d01ab2843ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\"" May 17 00:47:08.326548 env[1275]: time="2025-05-17T00:47:08.326499878Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:47:08.382403 env[1275]: time="2025-05-17T00:47:08.382364633Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae\"" May 17 00:47:08.382788 env[1275]: time="2025-05-17T00:47:08.382772581Z" level=info msg="StartContainer for \"1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae\"" May 17 00:47:08.393358 systemd[1]: Started cri-containerd-1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae.scope. May 17 00:47:08.431874 env[1275]: time="2025-05-17T00:47:08.431838752Z" level=info msg="StartContainer for \"1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae\" returns successfully" May 17 00:47:08.452982 systemd[1]: cri-containerd-1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae.scope: Deactivated successfully. 
May 17 00:47:08.470129 env[1275]: time="2025-05-17T00:47:08.470090636Z" level=info msg="shim disconnected" id=1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae May 17 00:47:08.470318 env[1275]: time="2025-05-17T00:47:08.470303651Z" level=warning msg="cleaning up after shim disconnected" id=1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae namespace=k8s.io May 17 00:47:08.470388 env[1275]: time="2025-05-17T00:47:08.470377688Z" level=info msg="cleaning up dead shim" May 17 00:47:08.475788 env[1275]: time="2025-05-17T00:47:08.475761259Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4062 runtime=io.containerd.runc.v2\n" May 17 00:47:08.709445 kubelet[2095]: E0517 00:47:08.709367 2095 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:47:08.862444 env[1275]: time="2025-05-17T00:47:08.862405006Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:47:08.946205 env[1275]: time="2025-05-17T00:47:08.946141005Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce\"" May 17 00:47:08.946835 env[1275]: time="2025-05-17T00:47:08.946822536Z" level=info msg="StartContainer for \"3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce\"" May 17 00:47:08.963499 systemd[1]: Started cri-containerd-3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce.scope. 
May 17 00:47:08.990070 env[1275]: time="2025-05-17T00:47:08.990035726Z" level=info msg="StartContainer for \"3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce\" returns successfully" May 17 00:47:09.029694 systemd[1]: cri-containerd-3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce.scope: Deactivated successfully. May 17 00:47:09.138195 env[1275]: time="2025-05-17T00:47:09.138137395Z" level=info msg="shim disconnected" id=3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce May 17 00:47:09.138425 env[1275]: time="2025-05-17T00:47:09.138408921Z" level=warning msg="cleaning up after shim disconnected" id=3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce namespace=k8s.io May 17 00:47:09.138508 env[1275]: time="2025-05-17T00:47:09.138494843Z" level=info msg="cleaning up dead shim" May 17 00:47:09.144225 env[1275]: time="2025-05-17T00:47:09.144190980Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n" May 17 00:47:09.559240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce-rootfs.mount: Deactivated successfully. 
May 17 00:47:09.620531 kubelet[2095]: I0517 00:47:09.620505 2095 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7aae66c-1d10-41dc-84f7-9c488211a430" path="/var/lib/kubelet/pods/b7aae66c-1d10-41dc-84f7-9c488211a430/volumes" May 17 00:47:09.863937 env[1275]: time="2025-05-17T00:47:09.863873914Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:47:09.889358 env[1275]: time="2025-05-17T00:47:09.889333463Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2\"" May 17 00:47:09.889878 env[1275]: time="2025-05-17T00:47:09.889864718Z" level=info msg="StartContainer for \"5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2\"" May 17 00:47:09.900559 kubelet[2095]: W0517 00:47:09.900523 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7aae66c_1d10_41dc_84f7_9c488211a430.slice/cri-containerd-33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694.scope WatchSource:0}: container "33a25fdb2f7d0b98d597a98975329709c36fb1b27f1b6e951c32f023dcb06694" in namespace "k8s.io": not found May 17 00:47:09.905567 systemd[1]: Started cri-containerd-5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2.scope. May 17 00:47:09.928094 env[1275]: time="2025-05-17T00:47:09.928063494Z" level=info msg="StartContainer for \"5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2\" returns successfully" May 17 00:47:09.934244 systemd[1]: cri-containerd-5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2.scope: Deactivated successfully. 
May 17 00:47:09.947654 env[1275]: time="2025-05-17T00:47:09.947627122Z" level=info msg="shim disconnected" id=5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2 May 17 00:47:09.947801 env[1275]: time="2025-05-17T00:47:09.947789876Z" level=warning msg="cleaning up after shim disconnected" id=5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2 namespace=k8s.io May 17 00:47:09.947856 env[1275]: time="2025-05-17T00:47:09.947845888Z" level=info msg="cleaning up dead shim" May 17 00:47:09.952070 env[1275]: time="2025-05-17T00:47:09.952045077Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4182 runtime=io.containerd.runc.v2\n" May 17 00:47:10.559132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2-rootfs.mount: Deactivated successfully. May 17 00:47:10.872824 env[1275]: time="2025-05-17T00:47:10.872680536Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:47:10.879072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595879048.mount: Deactivated successfully. May 17 00:47:10.883176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934837996.mount: Deactivated successfully. 
May 17 00:47:10.894054 env[1275]: time="2025-05-17T00:47:10.894020675Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a\"" May 17 00:47:10.895206 env[1275]: time="2025-05-17T00:47:10.894615951Z" level=info msg="StartContainer for \"172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a\"" May 17 00:47:10.904137 systemd[1]: Started cri-containerd-172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a.scope. May 17 00:47:10.928993 systemd[1]: cri-containerd-172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a.scope: Deactivated successfully. May 17 00:47:10.933599 env[1275]: time="2025-05-17T00:47:10.933574852Z" level=info msg="StartContainer for \"172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a\" returns successfully" May 17 00:47:10.955664 env[1275]: time="2025-05-17T00:47:10.955634253Z" level=info msg="shim disconnected" id=172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a May 17 00:47:10.955837 env[1275]: time="2025-05-17T00:47:10.955825801Z" level=warning msg="cleaning up after shim disconnected" id=172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a namespace=k8s.io May 17 00:47:10.955889 env[1275]: time="2025-05-17T00:47:10.955879669Z" level=info msg="cleaning up dead shim" May 17 00:47:10.961071 env[1275]: time="2025-05-17T00:47:10.961050991Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:47:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4238 runtime=io.containerd.runc.v2\n" May 17 00:47:11.869142 env[1275]: time="2025-05-17T00:47:11.869111160Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 
00:47:11.912918 env[1275]: time="2025-05-17T00:47:11.912881190Z" level=info msg="CreateContainer within sandbox \"cc9c25993ae2abbf7a47c58112ddda66b2e9cb0fd2d7c589b33ad55e1cc7d2f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442\"" May 17 00:47:11.913481 env[1275]: time="2025-05-17T00:47:11.913461837Z" level=info msg="StartContainer for \"6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442\"" May 17 00:47:11.924964 systemd[1]: Started cri-containerd-6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442.scope. May 17 00:47:11.954104 env[1275]: time="2025-05-17T00:47:11.954070322Z" level=info msg="StartContainer for \"6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442\" returns successfully" May 17 00:47:12.760174 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 00:47:12.880644 kubelet[2095]: I0517 00:47:12.880601 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z5s5q" podStartSLOduration=5.880587253 podStartE2EDuration="5.880587253s" podCreationTimestamp="2025-05-17 00:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:47:12.879875057 +0000 UTC m=+129.392380540" watchObservedRunningTime="2025-05-17 00:47:12.880587253 +0000 UTC m=+129.393092728" May 17 00:47:13.025877 kubelet[2095]: W0517 00:47:13.025757 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc34eee_adbf_4741_b350_0d01ab2843ac.slice/cri-containerd-1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae.scope WatchSource:0}: task 1d84f7a359bbef97c2e77860bffbbf47009b1e18ab33db0ff2f6a28e927e6fae not found: not found May 17 00:47:13.032939 systemd[1]: 
run-containerd-runc-k8s.io-6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442-runc.60xBpu.mount: Deactivated successfully. May 17 00:47:15.250227 systemd[1]: run-containerd-runc-k8s.io-6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442-runc.LaSTuE.mount: Deactivated successfully. May 17 00:47:15.301839 systemd-networkd[1082]: lxc_health: Link UP May 17 00:47:15.308824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:47:15.308980 systemd-networkd[1082]: lxc_health: Gained carrier May 17 00:47:16.131555 kubelet[2095]: W0517 00:47:16.131531 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc34eee_adbf_4741_b350_0d01ab2843ac.slice/cri-containerd-3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce.scope WatchSource:0}: task 3816a8d71baccec64afd09db5c37b7bf574f3ec946d0480acf948fb00a4021ce not found: not found May 17 00:47:16.762310 systemd-networkd[1082]: lxc_health: Gained IPv6LL May 17 00:47:17.457578 systemd[1]: run-containerd-runc-k8s.io-6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442-runc.0yjCMS.mount: Deactivated successfully. May 17 00:47:19.238059 kubelet[2095]: W0517 00:47:19.238033 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc34eee_adbf_4741_b350_0d01ab2843ac.slice/cri-containerd-5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2.scope WatchSource:0}: task 5c87061eee02ae326eef898eb801e67525b14a8f2cc83b81a011d0eb6c0bbdb2 not found: not found May 17 00:47:19.558037 systemd[1]: run-containerd-runc-k8s.io-6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442-runc.JhUx7Z.mount: Deactivated successfully. May 17 00:47:21.666230 systemd[1]: run-containerd-runc-k8s.io-6d23c00508f708048e24423887a873924c7cefee78513a115c94b9340f9b3442-runc.TpWvuM.mount: Deactivated successfully. 
May 17 00:47:21.706741 sshd[3840]: pam_unix(sshd:session): session closed for user core May 17 00:47:21.716191 systemd[1]: sshd@24-139.178.70.99:22-147.75.109.163:57000.service: Deactivated successfully. May 17 00:47:21.716846 systemd-logind[1263]: Session 27 logged out. Waiting for processes to exit. May 17 00:47:21.716940 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:47:21.717819 systemd-logind[1263]: Removed session 27. May 17 00:47:22.342524 kubelet[2095]: W0517 00:47:22.342473 2095 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fc34eee_adbf_4741_b350_0d01ab2843ac.slice/cri-containerd-172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a.scope WatchSource:0}: task 172c697152aa0967b9b22484e05699a2ea29640d37ce1adf17d059c172d6554a not found: not found