May 15 10:40:28.657353 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu May 15 09:06:41 -00 2025
May 15 10:40:28.657369 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0
May 15 10:40:28.657375 kernel: Disabled fast string operations
May 15 10:40:28.657379 kernel: BIOS-provided physical RAM map:
May 15 10:40:28.657382 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 15 10:40:28.657386 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 15 10:40:28.657392 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 15 10:40:28.657396 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 15 10:40:28.657400 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 15 10:40:28.657404 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 15 10:40:28.657408 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 15 10:40:28.657412 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 15 10:40:28.657416 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 15 10:40:28.657420 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 15 10:40:28.657426 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 15 10:40:28.657431 kernel: NX (Execute Disable) protection: active
May 15 10:40:28.657435 kernel: SMBIOS 2.7 present.
May 15 10:40:28.657440 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 15 10:40:28.657444 kernel: vmware: hypercall mode: 0x00
May 15 10:40:28.657448 kernel: Hypervisor detected: VMware
May 15 10:40:28.657453 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 15 10:40:28.657458 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 15 10:40:28.657462 kernel: vmware: using clock offset of 3159858982 ns
May 15 10:40:28.657466 kernel: tsc: Detected 3408.000 MHz processor
May 15 10:40:28.657471 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 10:40:28.657476 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 10:40:28.657480 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 15 10:40:28.657485 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 10:40:28.657489 kernel: total RAM covered: 3072M
May 15 10:40:28.657495 kernel: Found optimal setting for mtrr clean up
May 15 10:40:28.657500 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 15 10:40:28.657504 kernel: Using GB pages for direct mapping
May 15 10:40:28.657509 kernel: ACPI: Early table checksum verification disabled
May 15 10:40:28.657513 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 15 10:40:28.657518 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 15 10:40:28.657522 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 15 10:40:28.657527 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 15 10:40:28.657531 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 15 10:40:28.657535 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 15 10:40:28.657541 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 15 10:40:28.657547 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 15 10:40:28.657552 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 15 10:40:28.657557 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 15 10:40:28.657562 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 15 10:40:28.657568 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 15 10:40:28.657573 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 15 10:40:28.657577 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 15 10:40:28.657582 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 15 10:40:28.657587 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 15 10:40:28.657592 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 15 10:40:28.657597 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 15 10:40:28.657601 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 15 10:40:28.657606 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 15 10:40:28.657612 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 15 10:40:28.657617 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 15 10:40:28.657621 kernel: system APIC only can use physical flat
May 15 10:40:28.657626 kernel: Setting APIC routing to physical flat.
May 15 10:40:28.657631 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 15 10:40:28.657636 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 15 10:40:28.657641 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 15 10:40:28.657645 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 15 10:40:28.657650 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 15 10:40:28.657656 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 15 10:40:28.657660 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 15 10:40:28.657665 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 15 10:40:28.657670 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 15 10:40:28.657674 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 15 10:40:28.657679 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 15 10:40:28.657684 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 15 10:40:28.657688 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 15 10:40:28.657693 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 15 10:40:28.657698 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 15 10:40:28.657704 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 15 10:40:28.657708 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 15 10:40:28.657713 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 15 10:40:28.657718 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 15 10:40:28.657722 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 15 10:40:28.657727 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 15 10:40:28.657732 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 15 10:40:28.657737 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 15 10:40:28.657741 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 15 10:40:28.657746 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 15 10:40:28.657752 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 15 10:40:28.657757 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 15 10:40:28.657761 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 15 10:40:28.657766 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 15 10:40:28.657771 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 15 10:40:28.657776 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 15 10:40:28.657780 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 15 10:40:28.657785 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 15 10:40:28.657790 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 15 10:40:28.657795 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 15 10:40:28.657800 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 15 10:40:28.657805 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 15 10:40:28.657810 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 15 10:40:28.657815 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 15 10:40:28.657819 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 15 10:40:28.657824 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 15 10:40:28.657829 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 15 10:40:28.657833 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 15 10:40:28.657838 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 15 10:40:28.657843 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 15 10:40:28.657848 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 15 10:40:28.657853 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 15 10:40:28.657858 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 15 10:40:28.657863 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 15 10:40:28.657867 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 15 10:40:28.657872 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 15 10:40:28.657877 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 15 10:40:28.657882 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 15 10:40:28.657886 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 15 10:40:28.657891 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 15 10:40:28.657897 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 15 10:40:28.657906 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 15 10:40:28.657911 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 15 10:40:28.657916 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 15 10:40:28.657921 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 15 10:40:28.657927 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 15 10:40:28.657934 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 15 10:40:28.657940 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 15 10:40:28.657945 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 15 10:40:28.657950 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 15 10:40:28.657956 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 15 10:40:28.657961 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 15 10:40:28.657966 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 15 10:40:28.657972 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 15 10:40:28.657977 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 15 10:40:28.657982 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 15 10:40:28.657987 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 15 10:40:28.657993 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 15 10:40:28.657998 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 15 10:40:28.658003 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 15 10:40:28.658008 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 15 10:40:28.658013 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 15 10:40:28.658018 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 15 10:40:28.658023 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 15 10:40:28.658028 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 15 10:40:28.658033 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 15 10:40:28.658038 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 15 10:40:28.658044 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 15 10:40:28.658050 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 15 10:40:28.658055 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 15 10:40:28.658060 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 15 10:40:28.658065 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 15 10:40:28.658070 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 15 10:40:28.658075 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 15 10:40:28.658080 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 15 10:40:28.658085 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 15 10:40:28.658090 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 15 10:40:28.658096 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 15 10:40:28.658101 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 15 10:40:28.658106 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 15 10:40:28.658111 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 15 10:40:28.658116 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 15 10:40:28.658121 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 15 10:40:28.658126 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 15 10:40:28.658131 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 15 10:40:28.658137 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 15 10:40:28.658142 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 15 10:40:28.658147 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 15 10:40:28.658153 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 15 10:40:28.658158 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 15 10:40:28.658163 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 15 10:40:28.658191 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 15 10:40:28.658196 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 15 10:40:28.658202 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 15 10:40:28.658207 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 15 10:40:28.658212 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 15 10:40:28.658217 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 15 10:40:28.658223 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 15 10:40:28.658228 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 15 10:40:28.658234 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 15 10:40:28.658239 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 15 10:40:28.658244 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 15 10:40:28.658249 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 15 10:40:28.658254 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 15 10:40:28.658259 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 15 10:40:28.658264 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 15 10:40:28.658269 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 15 10:40:28.658275 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 15 10:40:28.658281 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 15 10:40:28.658286 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 15 10:40:28.658291 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 15 10:40:28.658296 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 15 10:40:28.658301 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 15 10:40:28.658306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 15 10:40:28.658311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 15 10:40:28.658316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 15 10:40:28.658323 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 15 10:40:28.658328 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 15 10:40:28.658333 kernel: Zone ranges:
May 15 10:40:28.658339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 10:40:28.658344 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 15 10:40:28.658349 kernel: Normal empty
May 15 10:40:28.658354 kernel: Movable zone start for each node
May 15 10:40:28.658359 kernel: Early memory node ranges
May 15 10:40:28.658365 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 15 10:40:28.658370 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 15 10:40:28.658376 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 15 10:40:28.658381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 15 10:40:28.658387 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 10:40:28.658392 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 15 10:40:28.658397 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 15 10:40:28.658402 kernel: ACPI: PM-Timer IO Port: 0x1008
May 15 10:40:28.658407 kernel: system APIC only can use physical flat
May 15 10:40:28.658412 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 15 10:40:28.658418 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 15 10:40:28.658424 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 15 10:40:28.658429 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 15 10:40:28.658434 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 15 10:40:28.658439 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 15 10:40:28.658444 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 15 10:40:28.658449 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 15 10:40:28.658454 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 15 10:40:28.658460 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 15 10:40:28.658465 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 15 10:40:28.658471 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 15 10:40:28.658476 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 15 10:40:28.658481 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 15 10:40:28.658486 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 15 10:40:28.658491 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 15 10:40:28.658496 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 15 10:40:28.658501 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 15 10:40:28.658506 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 15 10:40:28.658511 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 15 10:40:28.658517 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 15 10:40:28.658523 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 15 10:40:28.658528 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 15 10:40:28.658533 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 15 10:40:28.658538 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 15 10:40:28.658543 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 15 10:40:28.658548 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 15 10:40:28.658553 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 15 10:40:28.658558 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 15 10:40:28.658563 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 15 10:40:28.658570 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 15 10:40:28.658575 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 15 10:40:28.658580 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 15 10:40:28.658585 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 15 10:40:28.658590 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 15 10:40:28.658595 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 15 10:40:28.658600 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 15 10:40:28.658606 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 15 10:40:28.658611 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 15 10:40:28.658616 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 15 10:40:28.658622 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 15 10:40:28.658627 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 15 10:40:28.658632 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 15 10:40:28.658637 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 15 10:40:28.658642 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 15 10:40:28.658647 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 15 10:40:28.658653 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 15 10:40:28.658658 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 15 10:40:28.658663 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 15 10:40:28.658669 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 15 10:40:28.658674 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 15 10:40:28.658679 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 15 10:40:28.658684 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 15 10:40:28.658689 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 15 10:40:28.658694 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 15 10:40:28.658700 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 15 10:40:28.658705 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 15 10:40:28.658710 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 15 10:40:28.658715 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 15 10:40:28.658721 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 15 10:40:28.658726 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 15 10:40:28.658731 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 15 10:40:28.658737 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 15 10:40:28.658742 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 15 10:40:28.658747 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 15 10:40:28.658752 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 15 10:40:28.658757 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 15 10:40:28.658762 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 15 10:40:28.658768 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 15 10:40:28.658773 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 15 10:40:28.658779 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 15 10:40:28.658784 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 15 10:40:28.658789 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 15 10:40:28.658794 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 15 10:40:28.658799 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 15 10:40:28.658804 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 15 10:40:28.658809 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 15 10:40:28.658815 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 15 10:40:28.658821 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 15 10:40:28.658826 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 15 10:40:28.658831 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 15 10:40:28.658836 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 15 10:40:28.658841 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 15 10:40:28.658846 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 15 10:40:28.658851 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 15 10:40:28.658856 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 15 10:40:28.658862 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 15 10:40:28.658868 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 15 10:40:28.658873 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 15 10:40:28.658878 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 15 10:40:28.658883 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 15 10:40:28.658897 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 15 10:40:28.658902 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 15 10:40:28.658908 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 15 10:40:28.658919 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 15 10:40:28.658925 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 15 10:40:28.658932 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 15 10:40:28.658937 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 15 10:40:28.658942 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 15 10:40:28.658947 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 15 10:40:28.658952 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 15 10:40:28.658958 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 15 10:40:28.658963 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 15 10:40:28.658968 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 15 10:40:28.658973 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 15 10:40:28.658978 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 15 10:40:28.658984 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 15 10:40:28.658989 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 15 10:40:28.658994 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 15 10:40:28.658999 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 15 10:40:28.659008 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 15 10:40:28.659015 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 15 10:40:28.659021 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 15 10:40:28.659026 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 15 10:40:28.659031 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 15 10:40:28.659038 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 15 10:40:28.659043 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 15 10:40:28.659048 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 15 10:40:28.659053 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 15 10:40:28.659059 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 15 10:40:28.659064 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 15 10:40:28.659069 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 15 10:40:28.659074 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 15 10:40:28.659079 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 15 10:40:28.659084 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 15 10:40:28.659090 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 15 10:40:28.659095 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 15 10:40:28.659100 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 15 10:40:28.659106 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 15 10:40:28.659111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 15 10:40:28.659116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 10:40:28.659121 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 15 10:40:28.659126 kernel: TSC deadline timer available
May 15 10:40:28.659132 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 15 10:40:28.659138 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 15 10:40:28.659143 kernel: Booting paravirtualized kernel on VMware hypervisor
May 15 10:40:28.659148 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 10:40:28.659154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
May 15 10:40:28.659159 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
May 15 10:40:28.659164 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
May 15 10:40:28.659186 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 15 10:40:28.659192 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 15 10:40:28.659197 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 15 10:40:28.659203 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 15 10:40:28.659208 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 15 10:40:28.659213 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 15 10:40:28.659218 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 15 10:40:28.659230 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 15 10:40:28.659236 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 15 10:40:28.659242 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 15 10:40:28.659247 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 15 10:40:28.659254 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 15 10:40:28.659259 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 15 10:40:28.659264 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 15 10:40:28.659270 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 15 10:40:28.659275 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 15 10:40:28.659280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 15 10:40:28.659286 kernel: Policy zone: DMA32
May 15 10:40:28.659292 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0
May 15 10:40:28.659298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 10:40:28.659305 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 15 10:40:28.659310 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 15 10:40:28.659316 kernel: printk: log_buf_len min size: 262144 bytes
May 15 10:40:28.659322 kernel: printk: log_buf_len: 1048576 bytes
May 15 10:40:28.659327 kernel: printk: early log buf free: 239728(91%)
May 15 10:40:28.659333 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 10:40:28.659338 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 15 10:40:28.659344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 10:40:28.659350 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 155976K reserved, 0K cma-reserved)
May 15 10:40:28.659357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 15 10:40:28.659362 kernel: ftrace: allocating 34585 entries in 136 pages
May 15 10:40:28.659368 kernel: ftrace: allocated 136 pages with 2 groups
May 15 10:40:28.659374 kernel: rcu: Hierarchical RCU implementation.
May 15 10:40:28.659380 kernel: rcu: RCU event tracing is enabled.
May 15 10:40:28.659387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 15 10:40:28.659392 kernel: Rude variant of Tasks RCU enabled.
May 15 10:40:28.659398 kernel: Tracing variant of Tasks RCU enabled.
May 15 10:40:28.659404 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 10:40:28.659409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 15 10:40:28.659415 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 15 10:40:28.659420 kernel: random: crng init done
May 15 10:40:28.659426 kernel: Console: colour VGA+ 80x25
May 15 10:40:28.659431 kernel: printk: console [tty0] enabled
May 15 10:40:28.659438 kernel: printk: console [ttyS0] enabled
May 15 10:40:28.659443 kernel: ACPI: Core revision 20210730
May 15 10:40:28.659449 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 15 10:40:28.659455 kernel: APIC: Switch to symmetric I/O mode setup
May 15 10:40:28.659460 kernel: x2apic enabled
May 15 10:40:28.659466 kernel: Switched APIC routing to physical x2apic.
May 15 10:40:28.659472 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 10:40:28.659477 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 15 10:40:28.659483 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 15 10:40:28.659489 kernel: Disabled fast string operations
May 15 10:40:28.659495 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 15 10:40:28.659500 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 15 10:40:28.659506 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 10:40:28.659512 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
May 15 10:40:28.659519 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 15 10:40:28.659524 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 15 10:40:28.659530 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 15 10:40:28.659535 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 15 10:40:28.659542 kernel: RETBleed: Mitigation: Enhanced IBRS May 15 10:40:28.659547 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 10:40:28.659553 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 15 10:40:28.659559 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 15 10:40:28.659564 kernel: SRBDS: Unknown: Dependent on hypervisor status May 15 10:40:28.659570 kernel: GDS: Unknown: Dependent on hypervisor status May 15 10:40:28.659575 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 10:40:28.659581 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 10:40:28.659587 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 10:40:28.659593 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 10:40:28.659599 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 15 10:40:28.659605 kernel: Freeing SMP alternatives memory: 32K May 15 10:40:28.659611 kernel: pid_max: default: 131072 minimum: 1024 May 15 10:40:28.659616 kernel: LSM: Security Framework initializing May 15 10:40:28.659621 kernel: SELinux: Initializing. 
May 15 10:40:28.659627 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 15 10:40:28.659633 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 15 10:40:28.659638 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 15 10:40:28.659645 kernel: Performance Events: Skylake events, core PMU driver. May 15 10:40:28.659651 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 15 10:40:28.659656 kernel: core: CPUID marked event: 'instructions' unavailable May 15 10:40:28.659662 kernel: core: CPUID marked event: 'bus cycles' unavailable May 15 10:40:28.659667 kernel: core: CPUID marked event: 'cache references' unavailable May 15 10:40:28.659672 kernel: core: CPUID marked event: 'cache misses' unavailable May 15 10:40:28.659678 kernel: core: CPUID marked event: 'branch instructions' unavailable May 15 10:40:28.659683 kernel: core: CPUID marked event: 'branch misses' unavailable May 15 10:40:28.659690 kernel: ... version: 1 May 15 10:40:28.659695 kernel: ... bit width: 48 May 15 10:40:28.659700 kernel: ... generic registers: 4 May 15 10:40:28.659706 kernel: ... value mask: 0000ffffffffffff May 15 10:40:28.659712 kernel: ... max period: 000000007fffffff May 15 10:40:28.659717 kernel: ... fixed-purpose events: 0 May 15 10:40:28.659723 kernel: ... event mask: 000000000000000f May 15 10:40:28.659729 kernel: signal: max sigframe size: 1776 May 15 10:40:28.659734 kernel: rcu: Hierarchical SRCU implementation. May 15 10:40:28.659740 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 15 10:40:28.659746 kernel: smp: Bringing up secondary CPUs ... May 15 10:40:28.659752 kernel: x86: Booting SMP configuration: May 15 10:40:28.659757 kernel: .... 
node #0, CPUs: #1 May 15 10:40:28.659763 kernel: Disabled fast string operations May 15 10:40:28.659768 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 15 10:40:28.659774 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 15 10:40:28.659779 kernel: smp: Brought up 1 node, 2 CPUs May 15 10:40:28.659785 kernel: smpboot: Max logical packages: 128 May 15 10:40:28.659790 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 15 10:40:28.659796 kernel: devtmpfs: initialized May 15 10:40:28.659803 kernel: x86/mm: Memory block size: 128MB May 15 10:40:28.659808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 15 10:40:28.659814 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 10:40:28.659819 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 15 10:40:28.659825 kernel: pinctrl core: initialized pinctrl subsystem May 15 10:40:28.659831 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 10:40:28.659836 kernel: audit: initializing netlink subsys (disabled) May 15 10:40:28.659842 kernel: audit: type=2000 audit(1747305627.058:1): state=initialized audit_enabled=0 res=1 May 15 10:40:28.659848 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 10:40:28.659854 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 10:40:28.659860 kernel: cpuidle: using governor menu May 15 10:40:28.659865 kernel: Simple Boot Flag at 0x36 set to 0x80 May 15 10:40:28.659871 kernel: ACPI: bus type PCI registered May 15 10:40:28.659877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 10:40:28.659882 kernel: dca service started, version 1.12.1 May 15 10:40:28.659888 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 15 10:40:28.659893 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 May 15 10:40:28.659899 kernel: PCI: Using configuration type 1 for base access May 15 10:40:28.659907 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 15 10:40:28.659914 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 10:40:28.659919 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 10:40:28.659925 kernel: ACPI: Added _OSI(Module Device) May 15 10:40:28.659930 kernel: ACPI: Added _OSI(Processor Device) May 15 10:40:28.659936 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 10:40:28.659941 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 10:40:28.659947 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 10:40:28.659952 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 10:40:28.659959 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 10:40:28.659965 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 10:40:28.659971 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 15 10:40:28.659976 kernel: ACPI: Interpreter enabled May 15 10:40:28.659982 kernel: ACPI: PM: (supports S0 S1 S5) May 15 10:40:28.659987 kernel: ACPI: Using IOAPIC for interrupt routing May 15 10:40:28.659993 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 10:40:28.659998 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 15 10:40:28.660004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 15 10:40:28.660080 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 10:40:28.660129 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 15 10:40:28.660182 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 15 10:40:28.660191 kernel: PCI host bridge to bus 0000:00 May 15 10:40:28.660238 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 
10:40:28.660280 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 15 10:40:28.660321 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 10:40:28.660360 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 10:40:28.660399 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 15 10:40:28.660438 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 15 10:40:28.660491 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 15 10:40:28.660543 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 15 10:40:28.660596 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 15 10:40:28.660668 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 15 10:40:28.660717 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 15 10:40:28.663391 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 15 10:40:28.663449 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 15 10:40:28.663501 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 15 10:40:28.663553 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 15 10:40:28.663613 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 15 10:40:28.663666 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 15 10:40:28.663715 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 15 10:40:28.663771 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 15 10:40:28.663822 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 15 10:40:28.663872 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 15 10:40:28.663927 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 15 10:40:28.663978 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 15 10:40:28.664028 kernel: pci 0000:00:0f.0: reg 0x14: [mem 
0xe8000000-0xefffffff pref] May 15 10:40:28.664077 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 15 10:40:28.664125 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 15 10:40:28.664182 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 10:40:28.664237 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 15 10:40:28.664295 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664345 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 15 10:40:28.664400 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664454 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 15 10:40:28.664509 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664559 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 15 10:40:28.664616 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664666 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 15 10:40:28.664726 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664788 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 15 10:40:28.664842 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664893 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 15 10:40:28.664948 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.664999 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 15 10:40:28.665052 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665103 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 15 10:40:28.665155 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665214 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 15 10:40:28.665269 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 
0x060400 May 15 10:40:28.665320 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 15 10:40:28.665373 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665423 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 15 10:40:28.665476 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665527 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 15 10:40:28.665584 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665635 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 15 10:40:28.665687 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665738 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 15 10:40:28.665792 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665841 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 15 10:40:28.665897 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.665951 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 15 10:40:28.666004 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.666055 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 15 10:40:28.666108 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.666159 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 15 10:40:28.668261 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668324 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 15 10:40:28.668378 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668426 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 15 10:40:28.668477 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668523 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 15 10:40:28.668572 kernel: pci 
0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668621 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 15 10:40:28.668671 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668717 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 15 10:40:28.668766 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668811 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 15 10:40:28.668860 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.668910 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 15 10:40:28.668962 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669008 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 15 10:40:28.669060 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669105 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 15 10:40:28.669154 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669218 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 15 10:40:28.669269 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669315 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 15 10:40:28.669364 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669409 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 15 10:40:28.669458 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669510 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 15 10:40:28.669560 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 15 10:40:28.669607 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 15 10:40:28.669656 kernel: pci_bus 0000:01: extended config space not accessible May 15 10:40:28.669704 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 15 
10:40:28.669755 kernel: pci_bus 0000:02: extended config space not accessible May 15 10:40:28.669764 kernel: acpiphp: Slot [32] registered May 15 10:40:28.669772 kernel: acpiphp: Slot [33] registered May 15 10:40:28.669778 kernel: acpiphp: Slot [34] registered May 15 10:40:28.669783 kernel: acpiphp: Slot [35] registered May 15 10:40:28.669789 kernel: acpiphp: Slot [36] registered May 15 10:40:28.669795 kernel: acpiphp: Slot [37] registered May 15 10:40:28.669800 kernel: acpiphp: Slot [38] registered May 15 10:40:28.669806 kernel: acpiphp: Slot [39] registered May 15 10:40:28.669812 kernel: acpiphp: Slot [40] registered May 15 10:40:28.669817 kernel: acpiphp: Slot [41] registered May 15 10:40:28.669824 kernel: acpiphp: Slot [42] registered May 15 10:40:28.669830 kernel: acpiphp: Slot [43] registered May 15 10:40:28.669835 kernel: acpiphp: Slot [44] registered May 15 10:40:28.669841 kernel: acpiphp: Slot [45] registered May 15 10:40:28.669847 kernel: acpiphp: Slot [46] registered May 15 10:40:28.669853 kernel: acpiphp: Slot [47] registered May 15 10:40:28.669858 kernel: acpiphp: Slot [48] registered May 15 10:40:28.669864 kernel: acpiphp: Slot [49] registered May 15 10:40:28.669869 kernel: acpiphp: Slot [50] registered May 15 10:40:28.669875 kernel: acpiphp: Slot [51] registered May 15 10:40:28.669881 kernel: acpiphp: Slot [52] registered May 15 10:40:28.669887 kernel: acpiphp: Slot [53] registered May 15 10:40:28.669893 kernel: acpiphp: Slot [54] registered May 15 10:40:28.669898 kernel: acpiphp: Slot [55] registered May 15 10:40:28.669904 kernel: acpiphp: Slot [56] registered May 15 10:40:28.669909 kernel: acpiphp: Slot [57] registered May 15 10:40:28.669915 kernel: acpiphp: Slot [58] registered May 15 10:40:28.669921 kernel: acpiphp: Slot [59] registered May 15 10:40:28.669926 kernel: acpiphp: Slot [60] registered May 15 10:40:28.669933 kernel: acpiphp: Slot [61] registered May 15 10:40:28.669938 kernel: acpiphp: Slot [62] registered May 15 10:40:28.669944 kernel: 
acpiphp: Slot [63] registered May 15 10:40:28.669989 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 15 10:40:28.670036 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 15 10:40:28.670081 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 15 10:40:28.670127 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 15 10:40:28.670179 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 15 10:40:28.672573 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 15 10:40:28.672630 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 15 10:40:28.672678 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 15 10:40:28.672724 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 15 10:40:28.672779 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 15 10:40:28.672828 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 15 10:40:28.672875 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 15 10:40:28.672926 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 15 10:40:28.672973 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 15 10:40:28.673021 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 15 10:40:28.673070 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 15 10:40:28.673116 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 15 10:40:28.673162 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 15 10:40:28.673219 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 15 10:40:28.673265 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 15 10:40:28.673312 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 15 10:40:28.673358 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 15 10:40:28.673406 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 15 10:40:28.673451 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 15 10:40:28.673497 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 15 10:40:28.673542 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 15 10:40:28.673590 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 15 10:40:28.673638 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 15 10:40:28.673682 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 15 10:40:28.673729 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 15 10:40:28.673774 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 15 10:40:28.673820 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 15 10:40:28.673868 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 15 10:40:28.673918 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 15 10:40:28.673964 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 15 10:40:28.674011 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 15 10:40:28.674056 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 15 10:40:28.674102 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] May 15 10:40:28.674148 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 15 10:40:28.679392 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 15 10:40:28.679451 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 15 10:40:28.679506 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 15 10:40:28.679560 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 15 10:40:28.679608 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 15 10:40:28.679654 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 15 10:40:28.679699 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 15 10:40:28.679744 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 15 10:40:28.679793 kernel: pci 0000:0b:00.0: supports D1 D2 May 15 10:40:28.679838 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 15 10:40:28.679883 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 15 10:40:28.679929 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 15 10:40:28.679993 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 15 10:40:28.680052 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 15 10:40:28.680098 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 15 10:40:28.680143 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 15 10:40:28.681562 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 15 10:40:28.681618 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 15 10:40:28.681669 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 15 10:40:28.681717 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 15 10:40:28.681765 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 15 10:40:28.681811 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 15 10:40:28.681858 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 15 10:40:28.681908 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 15 10:40:28.681954 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 15 10:40:28.682001 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 15 10:40:28.682046 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 15 10:40:28.682092 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 15 10:40:28.682140 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 15 10:40:28.683473 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 15 10:40:28.683528 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 15 10:40:28.683580 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 15 10:40:28.683635 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 15 10:40:28.683682 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] May 15 10:40:28.683729 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 15 10:40:28.683775 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 15 10:40:28.683820 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 15 10:40:28.683866 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 15 10:40:28.683915 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 15 10:40:28.683961 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 15 10:40:28.684009 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 15 10:40:28.684055 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 15 10:40:28.684100 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 15 10:40:28.684145 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 15 10:40:28.684443 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 15 10:40:28.685223 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 15 10:40:28.685287 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 15 10:40:28.685343 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 15 10:40:28.685396 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 15 10:40:28.685449 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 15 10:40:28.685500 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 15 10:40:28.685550 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 15 10:40:28.685602 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 15 10:40:28.685651 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 15 10:40:28.685700 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 15 10:40:28.685753 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 15 10:40:28.685803 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 15 10:40:28.685853 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 15 10:40:28.685904 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 15 10:40:28.685953 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 15 10:40:28.686002 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 15 10:40:28.686053 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 15 10:40:28.686102 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 15 10:40:28.686153 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 15 10:40:28.686233 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 15 10:40:28.686282 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 15 10:40:28.686331 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 15 10:40:28.686380 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 15 10:40:28.686431 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 15 10:40:28.686480 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 15 10:40:28.686529 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 15 10:40:28.686580 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 15 10:40:28.686632 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 15 10:40:28.686681 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 15 10:40:28.686729 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 15 10:40:28.686780 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 15 10:40:28.686829 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 15 10:40:28.686878 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 15 10:40:28.686932 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 15 
10:40:28.686982 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 15 10:40:28.687031 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 15 10:40:28.687081 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 15 10:40:28.687131 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 15 10:40:28.687190 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 15 10:40:28.687241 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 15 10:40:28.687290 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 15 10:40:28.687341 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 15 10:40:28.687392 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 15 10:40:28.687442 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 15 10:40:28.687491 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 15 10:40:28.687499 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 15 10:40:28.687505 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 15 10:40:28.687511 kernel: ACPI: PCI: Interrupt link LNKB disabled May 15 10:40:28.687517 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 10:40:28.687522 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 15 10:40:28.687529 kernel: iommu: Default domain type: Translated May 15 10:40:28.687535 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 10:40:28.687585 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 15 10:40:28.687633 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 10:40:28.687682 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 15 10:40:28.687690 kernel: vgaarb: loaded May 15 10:40:28.687696 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 10:40:28.687702 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 10:40:28.687708 kernel: PTP clock support registered May 15 10:40:28.687715 kernel: PCI: Using ACPI for IRQ routing May 15 10:40:28.687721 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 10:40:28.687727 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 15 10:40:28.687733 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 15 10:40:28.687739 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 15 10:40:28.687744 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 15 10:40:28.687750 kernel: clocksource: Switched to clocksource tsc-early May 15 10:40:28.687756 kernel: VFS: Disk quotas dquot_6.6.0 May 15 10:40:28.687762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 10:40:28.687768 kernel: pnp: PnP ACPI init May 15 10:40:28.687821 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 15 10:40:28.687867 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 15 10:40:28.687924 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 15 10:40:28.687978 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 15 10:40:28.688027 kernel: pnp 00:06: [dma 2] May 15 10:40:28.688075 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 15 10:40:28.688123 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 15 10:40:28.692602 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 15 10:40:28.692614 kernel: pnp: PnP ACPI: found 8 devices May 15 10:40:28.692621 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 10:40:28.692627 kernel: NET: Registered PF_INET protocol family May 15 10:40:28.692633 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 10:40:28.692639 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
May 15 10:40:28.692646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 10:40:28.692653 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 15 10:40:28.692658 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 15 10:40:28.692664 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 15 10:40:28.692670 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 10:40:28.692676 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 10:40:28.692682 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 10:40:28.692687 kernel: NET: Registered PF_XDP protocol family
May 15 10:40:28.692747 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 15 10:40:28.692801 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 15 10:40:28.692852 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 15 10:40:28.692899 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 15 10:40:28.692947 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 15 10:40:28.692995 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
May 15 10:40:28.693043 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
May 15 10:40:28.693091 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
May 15 10:40:28.693138 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
May 15 10:40:28.693197 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
May 15 10:40:28.693245 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
May 15 10:40:28.693292 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
May 15 10:40:28.693341 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
May 15 10:40:28.693386 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
May 15 10:40:28.693432 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
May 15 10:40:28.693478 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
May 15 10:40:28.693524 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
May 15 10:40:28.693569 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
May 15 10:40:28.693616 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
May 15 10:40:28.693662 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
May 15 10:40:28.693708 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
May 15 10:40:28.693753 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
May 15 10:40:28.693798 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
May 15 10:40:28.693844 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
May 15 10:40:28.693904 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
May 15 10:40:28.693967 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694014 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694060 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694105 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694151 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694209 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694257 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694305 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694351 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694396 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694442 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694487 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694533 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694579 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694624 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694672 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694718 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694763 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694808 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694853 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694898 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.694949 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.694995 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.696524 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.696579 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.696629 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.696962 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697015 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697063 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697110 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697156 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697215 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697261 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697306 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697351 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697397 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697442 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697487 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697532 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697576 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697644 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697692 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697738 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697784 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697830 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697876 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.697933 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.697979 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698024 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698072 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698117 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698162 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698224 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698271 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698316 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698361 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698406 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698451 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698499 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698545 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698591 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698637 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698683 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698728 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698773 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698819 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698865 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.698913 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.698958 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.699004 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.699051 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.699096 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.699141 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.699199 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.699247 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.699293 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.699338 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.699680 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.699734 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.699783 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.700110 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.700163 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.701446 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
May 15 10:40:28.701500 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
May 15 10:40:28.701763 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 15 10:40:28.701815 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
May 15 10:40:28.701865 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 15 10:40:28.701932 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 15 10:40:28.702231 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 15 10:40:28.702310 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
May 15 10:40:28.702641 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 15 10:40:28.702692 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 15 10:40:28.702739 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 15 10:40:28.702786 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
May 15 10:40:28.702835 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 15 10:40:28.702881 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 15 10:40:28.702927 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 15 10:40:28.702972 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 15 10:40:28.703018 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 15 10:40:28.703064 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 15 10:40:28.703109 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 15 10:40:28.703481 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 15 10:40:28.703533 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 15 10:40:28.703601 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 15 10:40:28.703653 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 15 10:40:28.703700 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 15 10:40:28.703746 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 15 10:40:28.703794 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 15 10:40:28.703841 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 15 10:40:28.703887 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 15 10:40:28.703938 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 15 10:40:28.703985 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 15 10:40:28.704031 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 15 10:40:28.704076 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 15 10:40:28.704122 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 15 10:40:28.704176 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 15 10:40:28.704230 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 15 10:40:28.704281 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
May 15 10:40:28.704328 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 15 10:40:28.704376 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 15 10:40:28.704422 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 15 10:40:28.704467 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
May 15 10:40:28.704643 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 15 10:40:28.704693 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 15 10:40:28.704739 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 15 10:40:28.704786 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 15 10:40:28.704984 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 15 10:40:28.705039 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 15 10:40:28.705089 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 15 10:40:28.705136 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 15 10:40:28.705474 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 15 10:40:28.705527 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 15 10:40:28.705574 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 15 10:40:28.705619 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 15 10:40:28.705665 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 15 10:40:28.705720 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 15 10:40:28.705774 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 15 10:40:28.705820 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 15 10:40:28.705868 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 15 10:40:28.705913 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 15 10:40:28.706244 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 15 10:40:28.706300 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
May 15 10:40:28.706349 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 15 10:40:28.706664 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 15 10:40:28.706724 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 15 10:40:28.706773 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 15 10:40:28.706820 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 15 10:40:28.706869 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 15 10:40:28.706919 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 15 10:40:28.706967 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 15 10:40:28.707012 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 15 10:40:28.707057 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 15 10:40:28.707103 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 15 10:40:28.707150 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 15 10:40:28.707252 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 15 10:40:28.707298 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 15 10:40:28.707344 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 15 10:40:28.707670 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 15 10:40:28.707722 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 15 10:40:28.707769 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
May 15 10:40:28.707815 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
May 15 10:40:28.707861 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
May 15 10:40:28.707912 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
May 15 10:40:28.707959 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
May 15 10:40:28.708005 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
May 15 10:40:28.708050 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
May 15 10:40:28.708097 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
May 15 10:40:28.708143 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
May 15 10:40:28.708206 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
May 15 10:40:28.708254 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
May 15 10:40:28.708299 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
May 15 10:40:28.708344 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
May 15 10:40:28.708520 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
May 15 10:40:28.708573 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
May 15 10:40:28.708620 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
May 15 10:40:28.708665 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
May 15 10:40:28.708989 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
May 15 10:40:28.709042 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
May 15 10:40:28.709090 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
May 15 10:40:28.709138 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
May 15 10:40:28.709214 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
May 15 10:40:28.709261 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
May 15 10:40:28.709308 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
May 15 10:40:28.709353 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
May 15 10:40:28.709678 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
May 15 10:40:28.709733 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 15 10:40:28.709781 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
May 15 10:40:28.709827 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
May 15 10:40:28.709873 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
May 15 10:40:28.709920 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
May 15 10:40:28.709965 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
May 15 10:40:28.710010 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
May 15 10:40:28.710056 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
May 15 10:40:28.710101 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
May 15 10:40:28.710147 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
May 15 10:40:28.710214 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
May 15 10:40:28.710262 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
May 15 10:40:28.710306 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 15 10:40:28.710351 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
May 15 10:40:28.710669 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
May 15 10:40:28.710714 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
May 15 10:40:28.710755 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
May 15 10:40:28.710795 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
May 15 10:40:28.710842 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
May 15 10:40:28.710885 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
May 15 10:40:28.710931 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 15 10:40:28.710973 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
May 15 10:40:28.711014 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
May 15 10:40:28.711056 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
May 15 10:40:28.711096 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
May 15 10:40:28.711140 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
May 15 10:40:28.711231 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
May 15 10:40:28.711277 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
May 15 10:40:28.711319 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
May 15 10:40:28.711365 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
May 15 10:40:28.711684 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
May 15 10:40:28.711728 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
May 15 10:40:28.711778 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
May 15 10:40:28.711821 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
May 15 10:40:28.711863 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
May 15 10:40:28.711910 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
May 15 10:40:28.711953 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
May 15 10:40:28.711998 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
May 15 10:40:28.712039 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 15 10:40:28.712087 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
May 15 10:40:28.712129 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
May 15 10:40:28.712210 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
May 15 10:40:28.712255 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
May 15 10:40:28.712301 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
May 15 10:40:28.712345 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
May 15 10:40:28.712569 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
May 15 10:40:28.712616 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
May 15 10:40:28.712659 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
May 15 10:40:28.712708 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
May 15 10:40:28.713019 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
May 15 10:40:28.713067 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
May 15 10:40:28.713123 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
May 15 10:40:28.713176 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
May 15 10:40:28.713222 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
May 15 10:40:28.713267 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
May 15 10:40:28.713326 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 15 10:40:28.713379 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
May 15 10:40:28.713517 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 15 10:40:28.713567 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
May 15 10:40:28.713611 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
May 15 10:40:28.713931 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
May 15 10:40:28.713982 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
May 15 10:40:28.714031 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
May 15 10:40:28.714074 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 15 10:40:28.714124 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
May 15 10:40:28.714193 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
May 15 10:40:28.714242 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
May 15 10:40:28.714289 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
May 15 10:40:28.714350 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
May 15 10:40:28.714399 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
May 15 10:40:28.714561 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
May 15 10:40:28.714607 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
May 15 10:40:28.714650 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
May 15 10:40:28.714964 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
May 15 10:40:28.715015 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
May 15 10:40:28.715062 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
May 15 10:40:28.715105 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
May 15 10:40:28.715155 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
May 15 10:40:28.715206 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
May 15 10:40:28.715255 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
May 15 10:40:28.715298 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
May 15 10:40:28.715345 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
May 15 10:40:28.715517 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
May 15 10:40:28.715571 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
May 15 10:40:28.715615 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
May 15 10:40:28.715925 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
May 15 10:40:28.715980 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
May 15 10:40:28.716026 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
May 15 10:40:28.716069 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
May 15 10:40:28.716122 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
May 15 10:40:28.716205 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
May 15 10:40:28.716260 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
May 15 10:40:28.716303 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 15 10:40:28.716352 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
May 15 10:40:28.716512 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
May 15 10:40:28.716567 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
May 15 10:40:28.716611 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
May 15 10:40:28.716656 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
May 15 10:40:28.716699 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
May 15 10:40:28.716745 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
May 15 10:40:28.716788 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 15 10:40:28.716841 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 15 10:40:28.716850 kernel: PCI: CLS 32 bytes, default 64
May 15 10:40:28.716856 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 15 10:40:28.716862 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 15 10:40:28.716869 kernel: clocksource: Switched to clocksource tsc
May 15 10:40:28.716875 kernel: Initialise system trusted keyrings
May 15 10:40:28.716881 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 15 10:40:28.716887 kernel: Key type asymmetric registered
May 15 10:40:28.716893 kernel: Asymmetric key parser 'x509' registered
May 15 10:40:28.716905 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 10:40:28.716911 kernel: io scheduler mq-deadline registered
May 15 10:40:28.716917 kernel: io scheduler kyber registered
May 15 10:40:28.716923 kernel: io scheduler bfq registered
May 15 10:40:28.716973 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
May 15 10:40:28.717020 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717068 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
May 15 10:40:28.717114 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717163 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
May 15 10:40:28.717224 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717272 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
May 15 10:40:28.717322 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717369 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
May 15 10:40:28.717415 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717466 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
May 15 10:40:28.717512 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717559 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
May 15 10:40:28.717605 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717652 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
May 15 10:40:28.717701 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717748 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
May 15 10:40:28.717794 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717841 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
May 15 10:40:28.717886 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.717939 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
May 15 10:40:28.717985 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718033 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
May 15 10:40:28.718080 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718127 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
May 15 10:40:28.718180 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718228 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
May 15 10:40:28.718273 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718322 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
May 15 10:40:28.718369 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718415 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
May 15 10:40:28.718730 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718787 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
May 15 10:40:28.718839 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718887 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
May 15 10:40:28.718935 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.718984 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
May 15 10:40:28.719030 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719077 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
May 15 10:40:28.719124 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719210 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
May 15 10:40:28.719261 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719308 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
May 15 10:40:28.719355 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719401 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
May 15 10:40:28.719450 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719496 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
May 15 10:40:28.719542 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719588 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
May 15 10:40:28.719904 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.719960 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
May 15 10:40:28.720028 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.720106 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
May 15 10:40:28.720365 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.720429 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
May 15 10:40:28.720482 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.721408 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
May 15 10:40:28.721474 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.721526 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
May 15 10:40:28.721575 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.721927 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
May 15 10:40:28.721983 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.722039 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
May 15 10:40:28.722435 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 15 10:40:28.722445 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 10:40:28.722452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 10:40:28.722458 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 10:40:28.722465 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
May 15 10:40:28.722473 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 10:40:28.722481 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 10:40:28.722659 kernel: rtc_cmos 00:01: registered as rtc0
May 15 10:40:28.722711 kernel: rtc_cmos 00:01: setting system clock to 2025-05-15T10:40:28 UTC (1747305628)
May 15 10:40:28.722754 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
May 15 10:40:28.722762 kernel: intel_pstate: CPU model not supported
May 15 10:40:28.722769 kernel: NET: Registered PF_INET6 protocol family
May 15 10:40:28.722775 kernel: Segment Routing with IPv6
May 15 10:40:28.722782 kernel: In-situ OAM (IOAM) with IPv6
May 15 10:40:28.722788 kernel: NET: Registered PF_PACKET protocol family
May 15 10:40:28.722796 kernel: Key type dns_resolver registered
May 15 10:40:28.722802 kernel: IPI shorthand broadcast: enabled
May 15 10:40:28.722973 kernel: sched_clock: Marking stable (824224196, 220429276)->(1107241111, -62587639)
May 15 10:40:28.722983 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 10:40:28.722990 kernel: registered taskstats version 1
May 15 10:40:28.722996 kernel: Loading compiled-in X.509 certificates
May 15 10:40:28.723002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 04007c306af6b7696d09b3c2eafc1297036fd28e'
May 15 10:40:28.723008 kernel: Key type .fscrypt registered
May 15 10:40:28.723014 kernel: Key type fscrypt-provisioning
registered May 15 10:40:28.723022 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 10:40:28.723028 kernel: ima: Allocated hash algorithm: sha1 May 15 10:40:28.723034 kernel: ima: No architecture policies found May 15 10:40:28.723041 kernel: clk: Disabling unused clocks May 15 10:40:28.723047 kernel: Freeing unused kernel image (initmem) memory: 47472K May 15 10:40:28.723053 kernel: Write protecting the kernel read-only data: 28672k May 15 10:40:28.723059 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 15 10:40:28.723065 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 15 10:40:28.723073 kernel: Run /init as init process May 15 10:40:28.723079 kernel: with arguments: May 15 10:40:28.723085 kernel: /init May 15 10:40:28.723091 kernel: with environment: May 15 10:40:28.723097 kernel: HOME=/ May 15 10:40:28.723103 kernel: TERM=linux May 15 10:40:28.723109 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 10:40:28.723116 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:40:28.723124 systemd[1]: Detected virtualization vmware. May 15 10:40:28.723132 systemd[1]: Detected architecture x86-64. May 15 10:40:28.723138 systemd[1]: Running in initrd. May 15 10:40:28.723144 systemd[1]: No hostname configured, using default hostname. May 15 10:40:28.723154 systemd[1]: Hostname set to . May 15 10:40:28.723207 systemd[1]: Initializing machine ID from random generator. May 15 10:40:28.723214 systemd[1]: Queued start job for default target initrd.target. May 15 10:40:28.723221 systemd[1]: Started systemd-ask-password-console.path. May 15 10:40:28.723227 systemd[1]: Reached target cryptsetup.target. 
May 15 10:40:28.723472 systemd[1]: Reached target paths.target. May 15 10:40:28.723481 systemd[1]: Reached target slices.target. May 15 10:40:28.723487 systemd[1]: Reached target swap.target. May 15 10:40:28.723494 systemd[1]: Reached target timers.target. May 15 10:40:28.723500 systemd[1]: Listening on iscsid.socket. May 15 10:40:28.723506 systemd[1]: Listening on iscsiuio.socket. May 15 10:40:28.723512 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:40:28.723521 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:40:28.723528 systemd[1]: Listening on systemd-journald.socket. May 15 10:40:28.723534 systemd[1]: Listening on systemd-networkd.socket. May 15 10:40:28.723540 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:40:28.723716 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:40:28.723724 systemd[1]: Reached target sockets.target. May 15 10:40:28.723730 systemd[1]: Starting kmod-static-nodes.service... May 15 10:40:28.723737 systemd[1]: Finished network-cleanup.service. May 15 10:40:28.723743 systemd[1]: Starting systemd-fsck-usr.service... May 15 10:40:28.723751 systemd[1]: Starting systemd-journald.service... May 15 10:40:28.723759 systemd[1]: Starting systemd-modules-load.service... May 15 10:40:28.723765 systemd[1]: Starting systemd-resolved.service... May 15 10:40:28.723771 systemd[1]: Starting systemd-vconsole-setup.service... May 15 10:40:28.723777 systemd[1]: Finished kmod-static-nodes.service. May 15 10:40:28.723784 systemd[1]: Finished systemd-fsck-usr.service. May 15 10:40:28.723790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:40:28.723796 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:40:28.723803 kernel: audit: type=1130 audit(1747305628.659:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:40:28.723811 systemd[1]: Finished systemd-vconsole-setup.service. May 15 10:40:28.723817 systemd[1]: Starting dracut-cmdline-ask.service... May 15 10:40:28.723824 kernel: audit: type=1130 audit(1747305628.667:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.723830 systemd[1]: Started systemd-resolved.service. May 15 10:40:28.723836 systemd[1]: Reached target nss-lookup.target. May 15 10:40:28.723843 kernel: audit: type=1130 audit(1747305628.674:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.723849 systemd[1]: Finished dracut-cmdline-ask.service. May 15 10:40:28.723855 kernel: audit: type=1130 audit(1747305628.687:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.723862 systemd[1]: Starting dracut-cmdline.service... May 15 10:40:28.723869 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 10:40:28.723875 kernel: Bridge firewalling registered May 15 10:40:28.723881 kernel: SCSI subsystem initialized May 15 10:40:28.723909 systemd-journald[217]: Journal started May 15 10:40:28.723946 systemd-journald[217]: Runtime Journal (/run/log/journal/911fe67943b54834bd55340a4b383856) is 4.8M, max 38.8M, 34.0M free. May 15 10:40:28.727458 systemd[1]: Started systemd-journald.service. May 15 10:40:28.727473 kernel: audit: type=1130 audit(1747305628.723:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:40:28.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.659028 systemd-modules-load[218]: Inserted module 'overlay' May 15 10:40:28.673062 systemd-resolved[219]: Positive Trust Anchors: May 15 10:40:28.673069 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:40:28.673088 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:40:28.674911 systemd-resolved[219]: Defaulting to hostname 'linux'. 
May 15 10:40:28.704524 systemd-modules-load[218]: Inserted module 'br_netfilter' May 15 10:40:28.730125 dracut-cmdline[233]: dracut-dracut-053 May 15 10:40:28.730125 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 15 10:40:28.730125 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0 May 15 10:40:28.733863 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 10:40:28.733878 kernel: device-mapper: uevent: version 1.0.3 May 15 10:40:28.733886 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 10:40:28.734290 systemd-modules-load[218]: Inserted module 'dm_multipath' May 15 10:40:28.738206 kernel: audit: type=1130 audit(1747305628.733:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.734621 systemd[1]: Finished systemd-modules-load.service. May 15 10:40:28.735106 systemd[1]: Starting systemd-sysctl.service... May 15 10:40:28.742322 systemd[1]: Finished systemd-sysctl.service. May 15 10:40:28.745225 kernel: audit: type=1130 audit(1747305628.741:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 10:40:28.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.759180 kernel: Loading iSCSI transport class v2.0-870. May 15 10:40:28.770177 kernel: iscsi: registered transport (tcp) May 15 10:40:28.785208 kernel: iscsi: registered transport (qla4xxx) May 15 10:40:28.785240 kernel: QLogic iSCSI HBA Driver May 15 10:40:28.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.801646 systemd[1]: Finished dracut-cmdline.service. May 15 10:40:28.802248 systemd[1]: Starting dracut-pre-udev.service... May 15 10:40:28.805324 kernel: audit: type=1130 audit(1747305628.800:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:28.839180 kernel: raid6: avx2x4 gen() 48247 MB/s May 15 10:40:28.856185 kernel: raid6: avx2x4 xor() 21841 MB/s May 15 10:40:28.873177 kernel: raid6: avx2x2 gen() 53080 MB/s May 15 10:40:28.890182 kernel: raid6: avx2x2 xor() 32014 MB/s May 15 10:40:28.907179 kernel: raid6: avx2x1 gen() 44881 MB/s May 15 10:40:28.924180 kernel: raid6: avx2x1 xor() 27748 MB/s May 15 10:40:28.941183 kernel: raid6: sse2x4 gen() 21225 MB/s May 15 10:40:28.958178 kernel: raid6: sse2x4 xor() 11928 MB/s May 15 10:40:28.975181 kernel: raid6: sse2x2 gen() 21541 MB/s May 15 10:40:28.992182 kernel: raid6: sse2x2 xor() 13384 MB/s May 15 10:40:29.009183 kernel: raid6: sse2x1 gen() 18065 MB/s May 15 10:40:29.026354 kernel: raid6: sse2x1 xor() 8933 MB/s May 15 10:40:29.026372 kernel: raid6: using algorithm avx2x2 gen() 53080 MB/s May 15 10:40:29.026380 kernel: raid6: .... 
xor() 32014 MB/s, rmw enabled May 15 10:40:29.027541 kernel: raid6: using avx2x2 recovery algorithm May 15 10:40:29.036178 kernel: xor: automatically using best checksumming function avx May 15 10:40:29.096187 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 15 10:40:29.101402 systemd[1]: Finished dracut-pre-udev.service. May 15 10:40:29.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:29.102076 systemd[1]: Starting systemd-udevd.service... May 15 10:40:29.100000 audit: BPF prog-id=7 op=LOAD May 15 10:40:29.100000 audit: BPF prog-id=8 op=LOAD May 15 10:40:29.105255 kernel: audit: type=1130 audit(1747305629.100:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:29.113069 systemd-udevd[415]: Using default interface naming scheme 'v252'. May 15 10:40:29.115927 systemd[1]: Started systemd-udevd.service. May 15 10:40:29.116525 systemd[1]: Starting dracut-pre-trigger.service... May 15 10:40:29.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:29.123973 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation May 15 10:40:29.138843 systemd[1]: Finished dracut-pre-trigger.service. May 15 10:40:29.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:29.139402 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:40:29.199779 systemd[1]: Finished systemd-udev-trigger.service. 
May 15 10:40:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:29.257478 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 15 10:40:29.257510 kernel: vmw_pvscsi: using 64bit dma May 15 10:40:29.269183 kernel: vmw_pvscsi: max_id: 16 May 15 10:40:29.269209 kernel: vmw_pvscsi: setting ring_pages to 8 May 15 10:40:29.273495 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI May 15 10:40:29.273526 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 15 10:40:29.276419 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 15 10:40:29.280178 kernel: cryptd: max_cpu_qlen set to 1000 May 15 10:40:29.281409 kernel: vmw_pvscsi: enabling reqCallThreshold May 15 10:40:29.281429 kernel: vmw_pvscsi: driver-based request coalescing enabled May 15 10:40:29.281437 kernel: vmw_pvscsi: using MSI-X May 15 10:40:29.282684 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 15 10:40:29.283514 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 15 10:40:29.285974 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 15 10:40:29.293572 kernel: AVX2 version of gcm_enc/dec engaged. May 15 10:40:29.293595 kernel: AES CTR mode by8 optimization enabled May 15 10:40:29.298181 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 15 10:40:29.304493 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 15 10:40:29.313347 kernel: sd 0:0:0:0: [sda] Write Protect is off May 15 10:40:29.313417 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 15 10:40:29.313476 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 15 10:40:29.313532 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 15 10:40:29.313589 kernel: libata version 3.00 loaded. 
May 15 10:40:29.313600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 10:40:29.313608 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 15 10:40:29.316180 kernel: ata_piix 0000:00:07.1: version 2.13 May 15 10:40:29.316260 kernel: scsi host1: ata_piix May 15 10:40:29.316324 kernel: scsi host2: ata_piix May 15 10:40:29.316381 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 15 10:40:29.316390 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 15 10:40:29.344326 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 10:40:29.345176 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (473) May 15 10:40:29.349720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 10:40:29.349898 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 10:40:29.350506 systemd[1]: Starting disk-uuid.service... May 15 10:40:29.354539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:40:29.368138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 10:40:29.485209 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 15 10:40:29.491196 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 15 10:40:29.517236 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 15 10:40:29.535578 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 10:40:29.535597 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 10:40:30.376187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 10:40:30.376441 disk-uuid[538]: The operation has completed successfully. May 15 10:40:30.416509 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 10:40:30.416779 systemd[1]: Finished disk-uuid.service. 
May 15 10:40:30.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.417508 systemd[1]: Starting verity-setup.service... May 15 10:40:30.427204 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 15 10:40:30.461973 systemd[1]: Found device dev-mapper-usr.device. May 15 10:40:30.462787 systemd[1]: Mounting sysusr-usr.mount... May 15 10:40:30.464108 systemd[1]: Finished verity-setup.service. May 15 10:40:30.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.513546 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 10:40:30.513669 systemd[1]: Mounted sysusr-usr.mount. May 15 10:40:30.514216 systemd[1]: Starting afterburn-network-kargs.service... May 15 10:40:30.514680 systemd[1]: Starting ignition-setup.service... May 15 10:40:30.529649 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 10:40:30.529675 kernel: BTRFS info (device sda6): using free space tree May 15 10:40:30.529687 kernel: BTRFS info (device sda6): has skinny extents May 15 10:40:30.536180 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 10:40:30.542820 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 10:40:30.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:40:30.562613 systemd[1]: Finished ignition-setup.service. May 15 10:40:30.563187 systemd[1]: Starting ignition-fetch-offline.service... May 15 10:40:30.633497 systemd[1]: Finished afterburn-network-kargs.service. May 15 10:40:30.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.634222 systemd[1]: Starting parse-ip-for-networkd.service... May 15 10:40:30.686744 systemd[1]: Finished parse-ip-for-networkd.service. May 15 10:40:30.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.686000 audit: BPF prog-id=9 op=LOAD May 15 10:40:30.687644 systemd[1]: Starting systemd-networkd.service... May 15 10:40:30.701056 systemd-networkd[733]: lo: Link UP May 15 10:40:30.701062 systemd-networkd[733]: lo: Gained carrier May 15 10:40:30.701337 systemd-networkd[733]: Enumeration completed May 15 10:40:30.701533 systemd-networkd[733]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 15 10:40:30.705507 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 15 10:40:30.705615 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 15 10:40:30.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.701742 systemd[1]: Started systemd-networkd.service. May 15 10:40:30.701881 systemd[1]: Reached target network.target. May 15 10:40:30.702397 systemd[1]: Starting iscsiuio.service... 
May 15 10:40:30.703969 systemd-networkd[733]: ens192: Link UP May 15 10:40:30.703971 systemd-networkd[733]: ens192: Gained carrier May 15 10:40:30.707846 systemd[1]: Started iscsiuio.service. May 15 10:40:30.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.708679 systemd[1]: Starting iscsid.service... May 15 10:40:30.710782 iscsid[738]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 10:40:30.710782 iscsid[738]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 15 10:40:30.710782 iscsid[738]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 10:40:30.710782 iscsid[738]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 10:40:30.710782 iscsid[738]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 10:40:30.711727 iscsid[738]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 10:40:30.711921 systemd[1]: Started iscsid.service. May 15 10:40:30.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.713010 systemd[1]: Starting dracut-initqueue.service... May 15 10:40:30.720090 systemd[1]: Finished dracut-initqueue.service. 
May 15 10:40:30.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.720380 systemd[1]: Reached target remote-fs-pre.target. May 15 10:40:30.720837 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:40:30.721063 systemd[1]: Reached target remote-fs.target. May 15 10:40:30.722011 systemd[1]: Starting dracut-pre-mount.service... May 15 10:40:30.724538 ignition[605]: Ignition 2.14.0 May 15 10:40:30.724544 ignition[605]: Stage: fetch-offline May 15 10:40:30.724573 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 10:40:30.724602 ignition[605]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 10:40:30.726884 systemd[1]: Finished dracut-pre-mount.service. May 15 10:40:30.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:40:30.728117 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 10:40:30.728201 ignition[605]: parsed url from cmdline: "" May 15 10:40:30.728203 ignition[605]: no config URL provided May 15 10:40:30.728207 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" May 15 10:40:30.728212 ignition[605]: no config at "/usr/lib/ignition/user.ign" May 15 10:40:30.735962 ignition[605]: config successfully fetched May 15 10:40:30.735984 ignition[605]: parsing config with SHA512: 1c80e0f34f0d09cbb1949a398c29529722af917156ac2d27547e6f183e67328350a8cf2d20d105462f482fed3ff334aeb8bfa5d884dab8203a4988602d078215 May 15 10:40:30.741470 unknown[605]: fetched base config from "system" May 15 10:40:30.741626 unknown[605]: fetched user config from "vmware" May 15 10:40:30.742089 ignition[605]: fetch-offline: fetch-offline passed May 15 10:40:30.742282 ignition[605]: Ignition finished successfully May 15 10:40:30.742884 systemd[1]: Finished ignition-fetch-offline.service. May 15 10:40:30.743063 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 10:40:30.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.743524 systemd[1]: Starting ignition-kargs.service... 
May 15 10:40:30.748975 ignition[753]: Ignition 2.14.0 May 15 10:40:30.748981 ignition[753]: Stage: kargs May 15 10:40:30.749043 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 10:40:30.749054 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 10:40:30.750481 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 10:40:30.752086 ignition[753]: kargs: kargs passed May 15 10:40:30.752112 ignition[753]: Ignition finished successfully May 15 10:40:30.752833 systemd[1]: Finished ignition-kargs.service. May 15 10:40:30.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.753501 systemd[1]: Starting ignition-disks.service... May 15 10:40:30.758060 ignition[759]: Ignition 2.14.0 May 15 10:40:30.758279 ignition[759]: Stage: disks May 15 10:40:30.758449 ignition[759]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 10:40:30.758597 ignition[759]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 10:40:30.759942 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 10:40:30.761520 ignition[759]: disks: disks passed May 15 10:40:30.761667 ignition[759]: Ignition finished successfully May 15 10:40:30.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.762189 systemd[1]: Finished ignition-disks.service. May 15 10:40:30.762337 systemd[1]: Reached target initrd-root-device.target. 
May 15 10:40:30.762428 systemd[1]: Reached target local-fs-pre.target. May 15 10:40:30.762511 systemd[1]: Reached target local-fs.target. May 15 10:40:30.762591 systemd[1]: Reached target sysinit.target. May 15 10:40:30.762668 systemd[1]: Reached target basic.target. May 15 10:40:30.763428 systemd[1]: Starting systemd-fsck-root.service... May 15 10:40:30.775877 systemd-fsck[767]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks May 15 10:40:30.777158 systemd[1]: Finished systemd-fsck-root.service. May 15 10:40:30.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:30.777866 systemd[1]: Mounting sysroot.mount... May 15 10:40:30.824661 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 10:40:30.824343 systemd[1]: Mounted sysroot.mount. May 15 10:40:30.824515 systemd[1]: Reached target initrd-root-fs.target. May 15 10:40:30.825944 systemd[1]: Mounting sysroot-usr.mount... May 15 10:40:30.826415 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 10:40:30.826444 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 10:40:30.826462 systemd[1]: Reached target ignition-diskful.target. May 15 10:40:30.828254 systemd[1]: Mounted sysroot-usr.mount. May 15 10:40:30.829000 systemd[1]: Starting initrd-setup-root.service... 
May 15 10:40:30.832918 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory
May 15 10:40:30.836692 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
May 15 10:40:30.839300 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
May 15 10:40:30.841813 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 10:40:30.873823 systemd[1]: Finished initrd-setup-root.service.
May 15 10:40:30.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:30.874378 systemd[1]: Starting ignition-mount.service...
May 15 10:40:30.874798 systemd[1]: Starting sysroot-boot.service...
May 15 10:40:30.878298 bash[818]: umount: /sysroot/usr/share/oem: not mounted.
May 15 10:40:30.883470 ignition[819]: INFO : Ignition 2.14.0
May 15 10:40:30.883470 ignition[819]: INFO : Stage: mount
May 15 10:40:30.883786 ignition[819]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 10:40:30.883786 ignition[819]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 15 10:40:30.884754 ignition[819]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 15 10:40:30.886034 ignition[819]: INFO : mount: mount passed
May 15 10:40:30.886147 ignition[819]: INFO : Ignition finished successfully
May 15 10:40:30.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:30.886650 systemd[1]: Finished ignition-mount.service.
May 15 10:40:30.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:30.893172 systemd[1]: Finished sysroot-boot.service.
May 15 10:40:31.476010 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 15 10:40:31.484189 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (828)
May 15 10:40:31.486938 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 10:40:31.486959 kernel: BTRFS info (device sda6): using free space tree
May 15 10:40:31.486970 kernel: BTRFS info (device sda6): has skinny extents
May 15 10:40:31.491179 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 15 10:40:31.493072 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 15 10:40:31.493752 systemd[1]: Starting ignition-files.service...
May 15 10:40:31.505656 ignition[848]: INFO : Ignition 2.14.0
May 15 10:40:31.505974 ignition[848]: INFO : Stage: files
May 15 10:40:31.506229 ignition[848]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 10:40:31.506433 ignition[848]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 15 10:40:31.508403 ignition[848]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 15 10:40:31.512028 ignition[848]: DEBUG : files: compiled without relabeling support, skipping
May 15 10:40:31.512513 ignition[848]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 10:40:31.512513 ignition[848]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 10:40:31.515003 ignition[848]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 10:40:31.515276 ignition[848]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 10:40:31.516161 unknown[848]: wrote ssh authorized keys file for user: core
May 15 10:40:31.516407 ignition[848]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 10:40:31.516846 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 10:40:31.517027 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 10:40:31.517027 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 10:40:31.517027 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 10:40:31.554898 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 10:40:31.680079 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 10:40:31.688358 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:40:31.689596 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 10:40:31.777366 systemd-networkd[733]: ens192: Gained IPv6LL
May 15 10:40:32.135777 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 15 10:40:32.168610 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:40:32.168842 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 15 10:40:32.168842 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 15 10:40:32.168842 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:40:32.168842 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:40:32.169466 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:40:32.170890 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
May 15 10:40:32.170890 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition
May 15 10:40:32.172382 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1617104643"
May 15 10:40:32.172590 ignition[848]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1617104643": device or resource busy
May 15 10:40:32.172856 ignition[848]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1617104643", trying btrfs: device or resource busy
May 15 10:40:32.173068 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1617104643"
May 15 10:40:32.173371 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1617104643"
May 15 10:40:32.174050 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem1617104643"
May 15 10:40:32.174258 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem1617104643"
May 15 10:40:32.175353 systemd[1]: mnt-oem1617104643.mount: Deactivated successfully.
May 15 10:40:32.176305 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
May 15 10:40:32.176489 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:40:32.176489 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 15 10:40:32.622435 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK
May 15 10:40:32.749031 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:40:32.749314 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 15 10:40:32.749499 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(12): [started] processing unit "vmtoolsd.service"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(12): [finished] processing unit "vmtoolsd.service"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(13): [started] processing unit "containerd.service"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(13): [finished] processing unit "containerd.service"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(15): [started] processing unit "prepare-helm.service"
May 15 10:40:32.749499 ignition[848]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(15): [finished] processing unit "prepare-helm.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(17): [started] processing unit "coreos-metadata.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(17): op(18): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(17): op(18): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(17): [finished] processing unit "coreos-metadata.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(19): [started] setting preset to enabled for "vmtoolsd.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(19): [finished] setting preset to enabled for "vmtoolsd.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service"
May 15 10:40:32.751046 ignition[848]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:40:32.801352 ignition[848]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:40:32.801567 ignition[848]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 10:40:32.801567 ignition[848]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:40:32.801567 ignition[848]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:40:32.801567 ignition[848]: INFO : files: files passed
May 15 10:40:32.801567 ignition[848]: INFO : Ignition finished successfully
May 15 10:40:32.803094 systemd[1]: Finished ignition-files.service.
May 15 10:40:32.805483 kernel: kauditd_printk_skb: 24 callbacks suppressed
May 15 10:40:32.805506 kernel: audit: type=1130 audit(1747305632.802:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.804026 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 10:40:32.804142 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 10:40:32.804559 systemd[1]: Starting ignition-quench.service...
May 15 10:40:32.810233 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 10:40:32.810302 systemd[1]: Finished ignition-quench.service.
May 15 10:40:32.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.814609 initrd-setup-root-after-ignition[874]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 10:40:32.815481 kernel: audit: type=1130 audit(1747305632.809:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.815495 kernel: audit: type=1131 audit(1747305632.809:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.814967 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 10:40:32.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.815857 systemd[1]: Reached target ignition-complete.target.
May 15 10:40:32.819092 kernel: audit: type=1130 audit(1747305632.814:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.818808 systemd[1]: Starting initrd-parse-etc.service...
May 15 10:40:32.826970 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 10:40:32.827188 systemd[1]: Finished initrd-parse-etc.service.
May 15 10:40:32.827466 systemd[1]: Reached target initrd-fs.target.
May 15 10:40:32.827673 systemd[1]: Reached target initrd.target.
May 15 10:40:32.827899 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 10:40:32.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.828570 systemd[1]: Starting dracut-pre-pivot.service...
May 15 10:40:32.833402 kernel: audit: type=1130 audit(1747305632.826:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.833418 kernel: audit: type=1131 audit(1747305632.826:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.835954 systemd[1]: Finished dracut-pre-pivot.service.
May 15 10:40:32.836492 systemd[1]: Starting initrd-cleanup.service...
May 15 10:40:32.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.840360 kernel: audit: type=1130 audit(1747305632.834:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.842458 systemd[1]: Stopped target nss-lookup.target.
May 15 10:40:32.842726 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 10:40:32.842996 systemd[1]: Stopped target timers.target.
May 15 10:40:32.843257 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 10:40:32.843324 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 10:40:32.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.843749 systemd[1]: Stopped target initrd.target.
May 15 10:40:32.846214 kernel: audit: type=1131 audit(1747305632.842:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.846298 systemd[1]: Stopped target basic.target.
May 15 10:40:32.846546 systemd[1]: Stopped target ignition-complete.target.
May 15 10:40:32.846804 systemd[1]: Stopped target ignition-diskful.target.
May 15 10:40:32.847055 systemd[1]: Stopped target initrd-root-device.target.
May 15 10:40:32.847318 systemd[1]: Stopped target remote-fs.target.
May 15 10:40:32.847560 systemd[1]: Stopped target remote-fs-pre.target.
May 15 10:40:32.847813 systemd[1]: Stopped target sysinit.target.
May 15 10:40:32.848069 systemd[1]: Stopped target local-fs.target.
May 15 10:40:32.848322 systemd[1]: Stopped target local-fs-pre.target.
May 15 10:40:32.848570 systemd[1]: Stopped target swap.target.
May 15 10:40:32.848788 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 10:40:32.848853 systemd[1]: Stopped dracut-pre-mount.service.
May 15 10:40:32.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.849305 systemd[1]: Stopped target cryptsetup.target.
May 15 10:40:32.853179 kernel: audit: type=1131 audit(1747305632.848:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.853043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 10:40:32.853106 systemd[1]: Stopped dracut-initqueue.service.
May 15 10:40:32.853273 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 10:40:32.855976 kernel: audit: type=1131 audit(1747305632.852:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.853329 systemd[1]: Stopped ignition-fetch-offline.service.
May 15 10:40:32.855918 systemd[1]: Stopped target paths.target.
May 15 10:40:32.856056 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 10:40:32.857189 systemd[1]: Stopped systemd-ask-password-console.path.
May 15 10:40:32.857340 systemd[1]: Stopped target slices.target.
May 15 10:40:32.857521 systemd[1]: Stopped target sockets.target.
May 15 10:40:32.857696 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 10:40:32.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.857761 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 15 10:40:32.857935 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 10:40:32.857988 systemd[1]: Stopped ignition-files.service.
May 15 10:40:32.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.860591 iscsid[738]: iscsid shutting down.
May 15 10:40:32.858745 systemd[1]: Stopping ignition-mount.service...
May 15 10:40:32.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.863566 ignition[887]: INFO : Ignition 2.14.0
May 15 10:40:32.863566 ignition[887]: INFO : Stage: umount
May 15 10:40:32.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.858929 systemd[1]: Stopping iscsid.service...
May 15 10:40:32.869998 ignition[887]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 10:40:32.869998 ignition[887]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
May 15 10:40:32.863219 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 10:40:32.872449 ignition[887]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 15 10:40:32.872449 ignition[887]: INFO : umount: umount passed
May 15 10:40:32.872449 ignition[887]: INFO : Ignition finished successfully
May 15 10:40:32.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.863301 systemd[1]: Stopped kmod-static-nodes.service.
May 15 10:40:32.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.863913 systemd[1]: Stopping sysroot-boot.service...
May 15 10:40:32.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.864004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 10:40:32.864075 systemd[1]: Stopped systemd-udev-trigger.service.
May 15 10:40:32.866309 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 10:40:32.866368 systemd[1]: Stopped dracut-pre-trigger.service.
May 15 10:40:32.867298 systemd[1]: iscsid.service: Deactivated successfully.
May 15 10:40:32.867359 systemd[1]: Stopped iscsid.service.
May 15 10:40:32.867687 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 10:40:32.867735 systemd[1]: Closed iscsid.socket.
May 15 10:40:32.867867 systemd[1]: Stopping iscsiuio.service...
May 15 10:40:32.869238 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 10:40:32.869287 systemd[1]: Finished initrd-cleanup.service.
May 15 10:40:32.869466 systemd[1]: iscsiuio.service: Deactivated successfully.
May 15 10:40:32.869513 systemd[1]: Stopped iscsiuio.service.
May 15 10:40:32.869647 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 10:40:32.869663 systemd[1]: Closed iscsiuio.socket.
May 15 10:40:32.872261 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 10:40:32.872302 systemd[1]: Stopped ignition-mount.service.
May 15 10:40:32.872522 systemd[1]: Stopped target network.target.
May 15 10:40:32.872896 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 10:40:32.872921 systemd[1]: Stopped ignition-disks.service.
May 15 10:40:32.873423 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 10:40:32.873443 systemd[1]: Stopped ignition-kargs.service.
May 15 10:40:32.873573 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 10:40:32.873591 systemd[1]: Stopped ignition-setup.service.
May 15 10:40:32.873782 systemd[1]: Stopping systemd-networkd.service...
May 15 10:40:32.874077 systemd[1]: Stopping systemd-resolved.service...
May 15 10:40:32.874824 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 10:40:32.877709 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 10:40:32.877758 systemd[1]: Stopped systemd-networkd.service.
May 15 10:40:32.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.878430 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 10:40:32.878453 systemd[1]: Closed systemd-networkd.socket.
May 15 10:40:32.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.880000 audit: BPF prog-id=9 op=UNLOAD
May 15 10:40:32.879008 systemd[1]: Stopping network-cleanup.service...
May 15 10:40:32.879105 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 10:40:32.879134 systemd[1]: Stopped parse-ip-for-networkd.service.
May 15 10:40:32.879443 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
May 15 10:40:32.879464 systemd[1]: Stopped afterburn-network-kargs.service.
May 15 10:40:32.879570 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 10:40:32.879590 systemd[1]: Stopped systemd-sysctl.service.
May 15 10:40:32.879743 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 10:40:32.879763 systemd[1]: Stopped systemd-modules-load.service.
May 15 10:40:32.879901 systemd[1]: Stopping systemd-udevd.service...
May 15 10:40:32.882049 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 10:40:32.884911 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 10:40:32.884964 systemd[1]: Stopped systemd-resolved.service.
May 15 10:40:32.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.885551 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 10:40:32.885623 systemd[1]: Stopped systemd-udevd.service.
May 15 10:40:32.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.885992 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 10:40:32.886042 systemd[1]: Stopped network-cleanup.service.
May 15 10:40:32.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.886302 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 10:40:32.886319 systemd[1]: Closed systemd-udevd-control.socket.
May 15 10:40:32.886434 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 10:40:32.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.886459 systemd[1]: Closed systemd-udevd-kernel.socket.
May 15 10:40:32.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.886596 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 10:40:32.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.886616 systemd[1]: Stopped dracut-pre-udev.service.
May 15 10:40:32.886773 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 10:40:32.886792 systemd[1]: Stopped dracut-cmdline.service.
May 15 10:40:32.886945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 10:40:32.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.889000 audit: BPF prog-id=6 op=UNLOAD
May 15 10:40:32.886963 systemd[1]: Stopped dracut-cmdline-ask.service.
May 15 10:40:32.887488 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 15 10:40:32.890295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 10:40:32.890325 systemd[1]: Stopped systemd-vconsole-setup.service.
May 15 10:40:32.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.891323 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 10:40:32.891366 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 15 10:40:32.979853 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 10:40:32.979930 systemd[1]: Stopped sysroot-boot.service.
May 15 10:40:32.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.980307 systemd[1]: Reached target initrd-switch-root.target.
May 15 10:40:32.980454 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 10:40:32.980485 systemd[1]: Stopped initrd-setup-root.service.
May 15 10:40:32.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:32.981225 systemd[1]: Starting initrd-switch-root.service...
May 15 10:40:32.994776 systemd[1]: Switching root.
May 15 10:40:32.994000 audit: BPF prog-id=8 op=UNLOAD
May 15 10:40:32.994000 audit: BPF prog-id=7 op=UNLOAD
May 15 10:40:32.996000 audit: BPF prog-id=5 op=UNLOAD
May 15 10:40:32.996000 audit: BPF prog-id=4 op=UNLOAD
May 15 10:40:32.996000 audit: BPF prog-id=3 op=UNLOAD
May 15 10:40:33.013050 systemd-journald[217]: Journal stopped
May 15 10:40:35.337090 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
May 15 10:40:35.337109 kernel: SELinux: Class mctp_socket not defined in policy.
May 15 10:40:35.337117 kernel: SELinux: Class anon_inode not defined in policy.
May 15 10:40:35.337123 kernel: SELinux: the above unknown classes and permissions will be allowed
May 15 10:40:35.337128 kernel: SELinux: policy capability network_peer_controls=1
May 15 10:40:35.337135 kernel: SELinux: policy capability open_perms=1
May 15 10:40:35.337141 kernel: SELinux: policy capability extended_socket_class=1
May 15 10:40:35.337147 kernel: SELinux: policy capability always_check_network=0
May 15 10:40:35.337153 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 10:40:35.337158 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 10:40:35.337163 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 10:40:35.338933 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 10:40:35.338951 systemd[1]: Successfully loaded SELinux policy in 42.638ms.
May 15 10:40:35.338960 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.448ms.
May 15 10:40:35.338969 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:40:35.338976 systemd[1]: Detected virtualization vmware.
May 15 10:40:35.338983 systemd[1]: Detected architecture x86-64.
May 15 10:40:35.338990 systemd[1]: Detected first boot.
May 15 10:40:35.338996 systemd[1]: Initializing machine ID from random generator.
May 15 10:40:35.339002 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 15 10:40:35.339008 systemd[1]: Populated /etc with preset unit settings.
May 15 10:40:35.339015 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:40:35.339022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:40:35.339030 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:40:35.339038 systemd[1]: Queued start job for default target multi-user.target.
May 15 10:40:35.339045 systemd[1]: Unnecessary job was removed for dev-sda6.device.
May 15 10:40:35.339051 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 15 10:40:35.339058 systemd[1]: Created slice system-addon\x2drun.slice.
May 15 10:40:35.339064 systemd[1]: Created slice system-getty.slice.
May 15 10:40:35.339071 systemd[1]: Created slice system-modprobe.slice.
May 15 10:40:35.339077 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 15 10:40:35.339094 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 15 10:40:35.339103 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 15 10:40:35.339110 systemd[1]: Created slice user.slice.
May 15 10:40:35.339116 systemd[1]: Started systemd-ask-password-console.path.
May 15 10:40:35.339123 systemd[1]: Started systemd-ask-password-wall.path.
May 15 10:40:35.339129 systemd[1]: Set up automount boot.automount.
May 15 10:40:35.339136 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 15 10:40:35.339142 systemd[1]: Reached target integritysetup.target.
May 15 10:40:35.339149 systemd[1]: Reached target remote-cryptsetup.target.
May 15 10:40:35.339157 systemd[1]: Reached target remote-fs.target.
May 15 10:40:35.341114 systemd[1]: Reached target slices.target.
May 15 10:40:35.341130 systemd[1]: Reached target swap.target.
May 15 10:40:35.341138 systemd[1]: Reached target torcx.target.
May 15 10:40:35.341144 systemd[1]: Reached target veritysetup.target.
May 15 10:40:35.341151 systemd[1]: Listening on systemd-coredump.socket.
May 15 10:40:35.341157 systemd[1]: Listening on systemd-initctl.socket.
May 15 10:40:35.341164 systemd[1]: Listening on systemd-journald-audit.socket.
May 15 10:40:35.341190 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 15 10:40:35.341198 systemd[1]: Listening on systemd-journald.socket.
May 15 10:40:35.341204 systemd[1]: Listening on systemd-networkd.socket.
May 15 10:40:35.341210 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 10:40:35.341217 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 10:40:35.341224 systemd[1]: Listening on systemd-userdbd.socket.
May 15 10:40:35.341232 systemd[1]: Mounting dev-hugepages.mount...
May 15 10:40:35.341239 systemd[1]: Mounting dev-mqueue.mount...
May 15 10:40:35.341245 systemd[1]: Mounting media.mount...
May 15 10:40:35.341252 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:40:35.341259 systemd[1]: Mounting sys-kernel-debug.mount...
May 15 10:40:35.341266 systemd[1]: Mounting sys-kernel-tracing.mount...
May 15 10:40:35.341273 systemd[1]: Mounting tmp.mount...
May 15 10:40:35.341281 systemd[1]: Starting flatcar-tmpfiles.service...
May 15 10:40:35.341289 systemd[1]: Starting ignition-delete-config.service...
May 15 10:40:35.341295 systemd[1]: Starting kmod-static-nodes.service...
May 15 10:40:35.341302 systemd[1]: Starting modprobe@configfs.service...
May 15 10:40:35.341309 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:40:35.341315 systemd[1]: Starting modprobe@drm.service...
May 15 10:40:35.341322 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:40:35.341329 systemd[1]: Starting modprobe@fuse.service...
May 15 10:40:35.341335 systemd[1]: Starting modprobe@loop.service...
May 15 10:40:35.341343 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 10:40:35.341350 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 15 10:40:35.341357 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 15 10:40:35.341364 systemd[1]: Starting systemd-journald.service...
May 15 10:40:35.341371 systemd[1]: Starting systemd-modules-load.service...
May 15 10:40:35.341377 systemd[1]: Starting systemd-network-generator.service...
May 15 10:40:35.341384 systemd[1]: Starting systemd-remount-fs.service...
May 15 10:40:35.341390 systemd[1]: Starting systemd-udev-trigger.service...
May 15 10:40:35.341397 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:40:35.341405 systemd[1]: Mounted dev-hugepages.mount.
May 15 10:40:35.341436 systemd[1]: Mounted dev-mqueue.mount.
May 15 10:40:35.341448 systemd[1]: Mounted media.mount.
May 15 10:40:35.341456 systemd[1]: Mounted sys-kernel-debug.mount.
May 15 10:40:35.341463 systemd[1]: Mounted sys-kernel-tracing.mount.
May 15 10:40:35.341470 systemd[1]: Mounted tmp.mount.
May 15 10:40:35.341476 systemd[1]: Finished systemd-remount-fs.service.
May 15 10:40:35.341483 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 10:40:35.341490 systemd[1]: Starting systemd-hwdb-update.service...
May 15 10:40:35.341500 systemd[1]: Starting systemd-random-seed.service...
May 15 10:40:35.341506 systemd[1]: Finished kmod-static-nodes.service.
May 15 10:40:35.341513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:40:35.341520 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:40:35.341526 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 10:40:35.341533 systemd[1]: Finished modprobe@drm.service.
May 15 10:40:35.341540 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 10:40:35.341549 systemd-journald[1031]: Journal started
May 15 10:40:35.341612 systemd-journald[1031]: Runtime Journal (/run/log/journal/6d1c453923844a9cbabc193a81867fc0) is 4.8M, max 38.8M, 34.0M free.
May 15 10:40:35.236000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 15 10:40:35.236000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 15 10:40:35.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.328000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 15 10:40:35.328000 audit[1031]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff6d1cf990 a2=4000 a3=7fff6d1cfa2c items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:40:35.345629 systemd[1]: Finished modprobe@configfs.service.
May 15 10:40:35.345644 systemd[1]: Started systemd-journald.service.
May 15 10:40:35.328000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 15 10:40:35.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.343854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:40:35.363073 systemd-journald[1031]: Time spent on flushing to /var/log/journal/6d1c453923844a9cbabc193a81867fc0 is 41.990ms for 1915 entries.
May 15 10:40:35.363073 systemd-journald[1031]: System Journal (/var/log/journal/6d1c453923844a9cbabc193a81867fc0) is 8.0M, max 584.8M, 576.8M free.
May 15 10:40:35.425536 systemd-journald[1031]: Received client request to flush runtime journal.
May 15 10:40:35.425584 kernel: fuse: init (API version 7.34)
May 15 10:40:35.425604 kernel: loop: module loaded
May 15 10:40:35.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.344023 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:40:35.344354 systemd[1]: Finished systemd-modules-load.service.
May 15 10:40:35.344651 systemd[1]: Finished systemd-network-generator.service.
May 15 10:40:35.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.344929 systemd[1]: Finished systemd-random-seed.service.
May 15 10:40:35.345072 systemd[1]: Reached target first-boot-complete.target.
May 15 10:40:35.345163 systemd[1]: Reached target network-pre.target.
May 15 10:40:35.352686 systemd[1]: Mounting sys-kernel-config.mount...
May 15 10:40:35.426720 jq[1018]: true
May 15 10:40:35.353698 systemd[1]: Starting systemd-journal-flush.service...
May 15 10:40:35.353816 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:40:35.354576 systemd[1]: Starting systemd-sysctl.service...
May 15 10:40:35.355363 systemd[1]: Mounted sys-kernel-config.mount.
May 15 10:40:35.384874 systemd[1]: Finished systemd-sysctl.service.
May 15 10:40:35.385151 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 10:40:35.385249 systemd[1]: Finished modprobe@fuse.service.
May 15 10:40:35.387470 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 15 10:40:35.389975 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 15 10:40:35.411418 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:40:35.411518 systemd[1]: Finished modprobe@loop.service.
May 15 10:40:35.411704 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:40:35.426100 systemd[1]: Finished systemd-journal-flush.service.
May 15 10:40:35.437660 jq[1047]: true
May 15 10:40:35.446621 systemd[1]: Finished flatcar-tmpfiles.service.
May 15 10:40:35.447663 systemd[1]: Starting systemd-sysusers.service...
May 15 10:40:35.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.480599 systemd[1]: Finished systemd-sysusers.service.
May 15 10:40:35.481737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 10:40:35.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.484972 systemd[1]: Finished systemd-udev-trigger.service.
May 15 10:40:35.485928 systemd[1]: Starting systemd-udev-settle.service...
May 15 10:40:35.494601 udevadm[1107]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 10:40:35.502090 ignition[1068]: Ignition 2.14.0
May 15 10:40:35.502347 ignition[1068]: deleting config from guestinfo properties
May 15 10:40:35.505254 ignition[1068]: Successfully deleted config
May 15 10:40:35.505802 systemd[1]: Finished ignition-delete-config.service.
May 15 10:40:35.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.516701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 10:40:35.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.876738 systemd[1]: Finished systemd-hwdb-update.service.
May 15 10:40:35.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.877750 systemd[1]: Starting systemd-udevd.service...
May 15 10:40:35.889503 systemd-udevd[1111]: Using default interface naming scheme 'v252'.
May 15 10:40:35.909511 systemd[1]: Started systemd-udevd.service.
May 15 10:40:35.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.910708 systemd[1]: Starting systemd-networkd.service...
May 15 10:40:35.918789 systemd[1]: Starting systemd-userdbd.service...
May 15 10:40:35.938394 systemd[1]: Found device dev-ttyS0.device.
May 15 10:40:35.950494 systemd[1]: Started systemd-userdbd.service.
May 15 10:40:35.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:35.981202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 10:40:35.985179 kernel: ACPI: button: Power Button [PWRF]
May 15 10:40:36.006426 systemd-networkd[1114]: lo: Link UP
May 15 10:40:36.006431 systemd-networkd[1114]: lo: Gained carrier
May 15 10:40:36.006900 systemd-networkd[1114]: Enumeration completed
May 15 10:40:36.006968 systemd[1]: Started systemd-networkd.service.
May 15 10:40:36.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:36.007181 systemd-networkd[1114]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
May 15 10:40:36.009181 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
May 15 10:40:36.009305 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
May 15 10:40:36.010927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
May 15 10:40:36.011261 systemd-networkd[1114]: ens192: Link UP
May 15 10:40:36.011343 systemd-networkd[1114]: ens192: Gained carrier
May 15 10:40:36.027824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 10:40:36.059000 audit[1122]: AVC avc: denied { confidentiality } for pid=1122 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 15 10:40:36.059000 audit[1122]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ab1014d190 a1=338ac a2=7f5d0f48ebc5 a3=5 items=110 ppid=1111 pid=1122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:40:36.059000 audit: CWD cwd="/"
May 15 10:40:36.059000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=1 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=2 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=3 name=(null) inode=25011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=4 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=5 name=(null) inode=25012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=6 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=7 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=8 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=9 name=(null) inode=25014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=10 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=11 name=(null) inode=25015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=12 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=13 name=(null) inode=25016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=14 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=15 name=(null) inode=25017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=16 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=17 name=(null) inode=25018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=18 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=19 name=(null) inode=25019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=20 name=(null) inode=25019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=21 name=(null) inode=25020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=22 name=(null) inode=25019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=23 name=(null) inode=25021 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=24 name=(null) inode=25019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=25 name=(null) inode=25022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=26 name=(null) inode=25019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=27 name=(null) inode=25023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=28 name=(null) inode=25019 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=29 name=(null) inode=25024 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=30 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=31 name=(null) inode=25025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=32 name=(null) inode=25025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=33 name=(null) inode=25026 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=34 name=(null) inode=25025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=35 name=(null) inode=25027 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=36 name=(null) inode=25025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=37 name=(null) inode=25028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=38 name=(null) inode=25025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=39 name=(null) inode=25029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=40 name=(null) inode=25025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=41 name=(null) inode=25030 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:40:36.059000 audit: PATH item=42 name=(null) inode=25010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0
cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=43 name=(null) inode=25031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=44 name=(null) inode=25031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=45 name=(null) inode=25032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=46 name=(null) inode=25031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=47 name=(null) inode=25033 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=48 name=(null) inode=25031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=49 name=(null) inode=25034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=50 name=(null) inode=25031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=51 name=(null) inode=25035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
10:40:36.059000 audit: PATH item=52 name=(null) inode=25031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=53 name=(null) inode=25036 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=55 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=56 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=57 name=(null) inode=25038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=58 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=59 name=(null) inode=25039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=60 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=61 
name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=62 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=63 name=(null) inode=25041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=64 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=65 name=(null) inode=25042 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=66 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=67 name=(null) inode=25043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=68 name=(null) inode=25040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=69 name=(null) inode=25044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=70 name=(null) inode=25040 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=71 name=(null) inode=25045 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=72 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=73 name=(null) inode=25046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=74 name=(null) inode=25046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=75 name=(null) inode=25047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=76 name=(null) inode=25046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=77 name=(null) inode=25048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=78 name=(null) inode=25046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=79 name=(null) inode=25049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=80 name=(null) inode=25046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=81 name=(null) inode=25050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=82 name=(null) inode=25046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=83 name=(null) inode=25051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=84 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=85 name=(null) inode=25052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=86 name=(null) inode=25052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=87 name=(null) inode=25053 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=88 name=(null) inode=25052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=89 name=(null) inode=25054 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=90 name=(null) inode=25052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=91 name=(null) inode=25055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=92 name=(null) inode=25052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=93 name=(null) inode=25056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=94 name=(null) inode=25052 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=95 name=(null) inode=25057 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=96 name=(null) inode=25037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=97 name=(null) inode=25058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=98 name=(null) inode=25058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=99 name=(null) inode=25059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=100 name=(null) inode=25058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=101 name=(null) inode=25060 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=102 name=(null) inode=25058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=103 name=(null) inode=25061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=104 name=(null) inode=25058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=105 name=(null) inode=25062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=106 name=(null) inode=25058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
10:40:36.059000 audit: PATH item=107 name=(null) inode=25063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PATH item=109 name=(null) inode=25064 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:40:36.059000 audit: PROCTITLE proctitle="(udev-worker)" May 15 10:40:36.067235 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 15 10:40:36.070553 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 May 15 10:40:36.082539 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 15 10:40:36.082625 kernel: Guest personality initialized and is active May 15 10:40:36.083952 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 10:40:36.083981 kernel: Initialized host personality May 15 10:40:36.088176 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 15 10:40:36.113179 kernel: mousedev: PS/2 mouse device common for all mice May 15 10:40:36.113454 (udev-worker)[1115]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 15 10:40:36.133438 systemd[1]: Finished systemd-udev-settle.service. May 15 10:40:36.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:36.134461 systemd[1]: Starting lvm2-activation-early.service... May 15 10:40:36.151479 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 15 10:40:36.174817 systemd[1]: Finished lvm2-activation-early.service. May 15 10:40:36.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:36.174997 systemd[1]: Reached target cryptsetup.target. May 15 10:40:36.176026 systemd[1]: Starting lvm2-activation.service... May 15 10:40:36.178968 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:40:36.205882 systemd[1]: Finished lvm2-activation.service. May 15 10:40:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:36.206061 systemd[1]: Reached target local-fs-pre.target. May 15 10:40:36.206157 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 10:40:36.206180 systemd[1]: Reached target local-fs.target. May 15 10:40:36.206268 systemd[1]: Reached target machines.target. May 15 10:40:36.207303 systemd[1]: Starting ldconfig.service... May 15 10:40:36.207922 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:40:36.207954 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:40:36.208764 systemd[1]: Starting systemd-boot-update.service... May 15 10:40:36.209602 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 10:40:36.210780 systemd[1]: Starting systemd-machine-id-commit.service... May 15 10:40:36.211673 systemd[1]: Starting systemd-sysext.service... 
May 15 10:40:36.216741 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1150 (bootctl) May 15 10:40:36.217434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 10:40:36.233770 systemd[1]: Unmounting usr-share-oem.mount... May 15 10:40:36.235825 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 10:40:36.236058 systemd[1]: Unmounted usr-share-oem.mount. May 15 10:40:36.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:36.246758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 10:40:36.255194 kernel: loop0: detected capacity change from 0 to 210664 May 15 10:40:36.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:36.971195 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 10:40:36.971672 systemd[1]: Finished systemd-machine-id-commit.service. May 15 10:40:37.090191 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 10:40:37.165864 systemd-fsck[1163]: fsck.fat 4.2 (2021-01-31) May 15 10:40:37.165864 systemd-fsck[1163]: /dev/sda1: 790 files, 120732/258078 clusters May 15 10:40:37.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.166832 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 10:40:37.167854 systemd[1]: Mounting boot.mount... 
May 15 10:40:37.190186 kernel: loop1: detected capacity change from 0 to 210664 May 15 10:40:37.284700 systemd[1]: Mounted boot.mount. May 15 10:40:37.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.304209 systemd[1]: Finished systemd-boot-update.service. May 15 10:40:37.309893 (sd-sysext)[1170]: Using extensions 'kubernetes'. May 15 10:40:37.311157 (sd-sysext)[1170]: Merged extensions into '/usr'. May 15 10:40:37.321359 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:40:37.322452 systemd[1]: Mounting usr-share-oem.mount... May 15 10:40:37.323502 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:40:37.324543 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:40:37.325275 systemd[1]: Starting modprobe@loop.service... May 15 10:40:37.325453 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:40:37.325532 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:40:37.325605 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:40:37.327894 systemd[1]: Mounted usr-share-oem.mount. May 15 10:40:37.328693 systemd[1]: Finished systemd-sysext.service. May 15 10:40:37.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.329979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 15 10:40:37.330054 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:40:37.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.332310 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:40:37.332391 systemd[1]: Finished modprobe@loop.service. May 15 10:40:37.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.333953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:40:37.334090 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:40:37.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.335407 systemd[1]: Starting ensure-sysext.service... 
May 15 10:40:37.335573 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:40:37.335614 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:40:37.336582 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 10:40:37.344981 systemd[1]: Reloading. May 15 10:40:37.349224 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 10:40:37.350484 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 10:40:37.352419 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 10:40:37.389640 /usr/lib/systemd/system-generators/torcx-generator[1205]: time="2025-05-15T10:40:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:40:37.389656 /usr/lib/systemd/system-generators/torcx-generator[1205]: time="2025-05-15T10:40:37Z" level=info msg="torcx already run" May 15 10:40:37.474743 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:40:37.474753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:40:37.488829 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 15 10:40:37.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.530980 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 10:40:37.533009 systemd[1]: Starting audit-rules.service... May 15 10:40:37.533878 systemd[1]: Starting clean-ca-certificates.service... May 15 10:40:37.534852 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 10:40:37.537335 systemd[1]: Starting systemd-resolved.service... May 15 10:40:37.539426 systemd[1]: Starting systemd-timesyncd.service... May 15 10:40:37.542430 systemd[1]: Starting systemd-update-utmp.service... May 15 10:40:37.545187 systemd[1]: Finished clean-ca-certificates.service. May 15 10:40:37.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.546747 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:40:37.549263 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:40:37.549992 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:40:37.551355 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:40:37.553611 systemd[1]: Starting modprobe@loop.service... May 15 10:40:37.553747 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:40:37.553827 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 15 10:40:37.553908 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:40:37.553962 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:40:37.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.554527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:40:37.554611 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:40:37.555796 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:40:37.555888 systemd[1]: Finished modprobe@loop.service. May 15 10:40:37.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.556274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:40:37.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:40:37.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.556392 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:40:37.557762 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:40:37.556000 audit[1283]: SYSTEM_BOOT pid=1283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 10:40:37.557857 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:40:37.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:40:37.560012 systemd[1]: Finished systemd-update-utmp.service. May 15 10:40:37.566441 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:40:37.567338 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:40:37.568085 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:40:37.569781 systemd[1]: Starting modprobe@loop.service... May 15 10:40:37.569911 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:40:37.570013 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 15 10:40:37.570099 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:40:37.570510 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:40:37.572126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:40:37.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.573427 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:40:37.573830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:40:37.573908 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:40:37.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.574570 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:40:37.576533 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:40:37.577452 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:40:37.579836 systemd[1]: Starting modprobe@drm.service...
May 15 10:40:37.580832 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:40:37.581270 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:40:37.581358 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:40:37.583437 systemd[1]: Starting systemd-networkd-wait-online.service...
May 15 10:40:37.584005 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 10:40:37.584029 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:40:37.585212 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:40:37.588567 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:40:37.588700 systemd[1]: Finished modprobe@loop.service.
May 15 10:40:37.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.589111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:40:37.590191 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:40:37.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.590635 systemd[1]: Finished ldconfig.service.
May 15 10:40:37.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.590964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:40:37.591047 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:40:37.591399 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:40:37.591462 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:40:37.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.593623 systemd[1]: Finished ensure-sysext.service.
May 15 10:40:37.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.598903 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 10:40:37.599000 systemd[1]: Finished modprobe@drm.service.
May 15 10:40:37.616654 systemd[1]: Finished systemd-journal-catalog-update.service.
May 15 10:40:37.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.617886 systemd[1]: Starting systemd-update-done.service...
May 15 10:40:37.627848 systemd[1]: Finished systemd-update-done.service.
May 15 10:40:37.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:40:37.648000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 15 10:40:37.648000 audit[1322]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeab6d5890 a2=420 a3=0 items=0 ppid=1273 pid=1322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:40:37.648000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 15 10:40:37.649500 augenrules[1322]: No rules
May 15 10:40:37.650040 systemd[1]: Started systemd-timesyncd.service.
May 15 10:40:37.650413 systemd[1]: Finished audit-rules.service.
May 15 10:40:37.650538 systemd[1]: Reached target time-set.target.
May 15 10:40:37.653659 systemd-resolved[1276]: Positive Trust Anchors:
May 15 10:40:37.653805 systemd-resolved[1276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:40:37.653867 systemd-resolved[1276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:40:37.673046 systemd-resolved[1276]: Defaulting to hostname 'linux'.
May 15 10:40:37.674077 systemd[1]: Started systemd-resolved.service.
May 15 10:40:37.674231 systemd[1]: Reached target network.target.
May 15 10:40:37.674319 systemd[1]: Reached target nss-lookup.target.
May 15 10:40:37.674414 systemd[1]: Reached target sysinit.target.
May 15 10:40:37.674549 systemd[1]: Started motdgen.path.
May 15 10:40:37.674648 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 15 10:40:37.674830 systemd[1]: Started logrotate.timer.
May 15 10:40:37.674963 systemd[1]: Started mdadm.timer.
May 15 10:40:37.675046 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 15 10:40:37.675137 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 10:40:37.675158 systemd[1]: Reached target paths.target.
May 15 10:40:37.675249 systemd[1]: Reached target timers.target.
May 15 10:40:37.675492 systemd[1]: Listening on dbus.socket.
May 15 10:40:37.676503 systemd[1]: Starting docker.socket...
May 15 10:40:37.677543 systemd[1]: Listening on sshd.socket.
May 15 10:40:37.677679 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:40:37.677868 systemd[1]: Listening on docker.socket.
May 15 10:40:37.677988 systemd[1]: Reached target sockets.target.
May 15 10:40:37.678094 systemd[1]: Reached target basic.target.
May 15 10:40:37.678264 systemd[1]: System is tainted: cgroupsv1
May 15 10:40:37.678291 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 10:40:37.678304 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 10:40:37.679127 systemd[1]: Starting containerd.service...
May 15 10:40:37.680068 systemd[1]: Starting dbus.service...
May 15 10:40:37.681035 systemd[1]: Starting enable-oem-cloudinit.service...
May 15 10:40:37.682053 systemd[1]: Starting extend-filesystems.service...
May 15 10:40:37.683325 jq[1333]: false
May 15 10:40:37.683766 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 15 10:40:37.684823 systemd[1]: Starting motdgen.service...
May 15 10:40:37.685877 systemd[1]: Starting prepare-helm.service...
May 15 10:40:37.687025 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 15 10:40:37.688237 systemd[1]: Starting sshd-keygen.service...
May 15 10:40:37.695662 systemd[1]: Starting systemd-logind.service...
May 15 10:40:37.695816 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:40:37.695856 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 10:40:37.697101 systemd[1]: Starting update-engine.service...
May 15 10:40:37.698986 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 15 10:40:37.699983 systemd[1]: Starting vmtoolsd.service...
May 15 10:40:37.702816 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 10:40:37.702948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 15 10:40:37.706897 systemd[1]: Started vmtoolsd.service.
May 15 10:40:37.714312 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 10:40:37.714732 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 15 10:40:37.722411 jq[1348]: true
May 15 10:40:37.725259 tar[1351]: linux-amd64/helm
May 15 10:40:37.727533 jq[1362]: true
May 15 10:40:37.744258 extend-filesystems[1334]: Found loop1
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda1
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda2
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda3
May 15 10:40:37.744558 extend-filesystems[1334]: Found usr
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda4
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda6
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda7
May 15 10:40:37.744558 extend-filesystems[1334]: Found sda9
May 15 10:40:37.744558 extend-filesystems[1334]: Checking size of /dev/sda9
May 15 10:40:37.758474 extend-filesystems[1334]: Old size kept for /dev/sda9
May 15 10:40:37.758474 extend-filesystems[1334]: Found sr0
May 15 10:40:37.753178 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 10:40:37.753443 systemd[1]: Finished extend-filesystems.service.
May 15 10:40:37.760549 systemd[1]: motdgen.service: Deactivated successfully.
May 15 10:40:37.760687 systemd[1]: Finished motdgen.service.
May 15 10:40:37.762613 dbus-daemon[1332]: [system] SELinux support is enabled
May 15 10:40:37.762700 systemd[1]: Started dbus.service.
May 15 10:40:37.764008 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 10:40:37.764826 bash[1387]: Updated "/home/core/.ssh/authorized_keys"
May 15 10:40:37.764028 systemd[1]: Reached target system-config.target.
May 15 10:40:37.764143 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 10:40:37.764154 systemd[1]: Reached target user-config.target.
May 15 10:40:37.764768 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 15 10:40:37.777181 kernel: NET: Registered PF_VSOCK protocol family
May 15 10:40:37.809725 update_engine[1347]: I0515 10:40:37.808990 1347 main.cc:92] Flatcar Update Engine starting
May 15 10:40:37.811710 env[1356]: time="2025-05-15T10:40:37.811681112Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 15 10:40:37.813464 update_engine[1347]: I0515 10:40:37.813400 1347 update_check_scheduler.cc:74] Next update check in 2m28s
May 15 10:40:37.815267 systemd[1]: Started update-engine.service.
May 15 10:40:37.816594 systemd[1]: Started locksmithd.service.
May 15 10:40:37.837096 env[1356]: time="2025-05-15T10:40:37.837068937Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 10:40:37.837159 env[1356]: time="2025-05-15T10:40:37.837152391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 10:40:37.837908 env[1356]: time="2025-05-15T10:40:37.837884850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 10:40:37.837908 env[1356]: time="2025-05-15T10:40:37.837905283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 10:40:37.838084 env[1356]: time="2025-05-15T10:40:37.838043724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 10:40:37.838084 env[1356]: time="2025-05-15T10:40:37.838057844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 10:40:37.838084 env[1356]: time="2025-05-15T10:40:37.838067734Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 15 10:40:37.838084 env[1356]: time="2025-05-15T10:40:37.838073289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 10:40:37.838164 env[1356]: time="2025-05-15T10:40:37.838115229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 10:40:37.838284 env[1356]: time="2025-05-15T10:40:37.838271531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 10:40:37.838373 env[1356]: time="2025-05-15T10:40:37.838360541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 10:40:37.838373 env[1356]: time="2025-05-15T10:40:37.838371635Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 10:40:37.838414 env[1356]: time="2025-05-15T10:40:37.838398891Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 15 10:40:37.838414 env[1356]: time="2025-05-15T10:40:37.838406558Z" level=info msg="metadata content store policy set" policy=shared
May 15 10:40:37.844254 env[1356]: time="2025-05-15T10:40:37.844231565Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 10:40:37.844254 env[1356]: time="2025-05-15T10:40:37.844254507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844263998Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844286307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844294706Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844302392Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844309315Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844318243Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844325973Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844333 env[1356]: time="2025-05-15T10:40:37.844333007Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844462 env[1356]: time="2025-05-15T10:40:37.844339964Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844462 env[1356]: time="2025-05-15T10:40:37.844347419Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 10:40:37.844462 env[1356]: time="2025-05-15T10:40:37.844406974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 10:40:37.844462 env[1356]: time="2025-05-15T10:40:37.844452290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844663964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844684249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844693364Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844719809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844727430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844734341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844741522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844748834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844755277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844761768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844768 env[1356]: time="2025-05-15T10:40:37.844767976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844779907Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844846665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844855654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844862386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844868500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844876572Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844882450Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844893155Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 15 10:40:37.844986 env[1356]: time="2025-05-15T10:40:37.844916444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 10:40:37.845249 env[1356]: time="2025-05-15T10:40:37.845028488Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 10:40:37.845249 env[1356]: time="2025-05-15T10:40:37.845065045Z" level=info msg="Connect containerd service"
May 15 10:40:37.845249 env[1356]: time="2025-05-15T10:40:37.845087542Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 10:40:37.847235 env[1356]: time="2025-05-15T10:40:37.845458051Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 10:40:37.847235 env[1356]: time="2025-05-15T10:40:37.845611677Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 10:40:37.847235 env[1356]: time="2025-05-15T10:40:37.845636109Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 10:40:37.845719 systemd[1]: Started containerd.service.
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847365474Z" level=info msg="containerd successfully booted in 0.036251s"
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847380554Z" level=info msg="Start subscribing containerd event"
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847406483Z" level=info msg="Start recovering state"
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847455665Z" level=info msg="Start event monitor"
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847466944Z" level=info msg="Start snapshots syncer"
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847472841Z" level=info msg="Start cni network conf syncer for default"
May 15 10:40:37.847501 env[1356]: time="2025-05-15T10:40:37.847479317Z" level=info msg="Start streaming server"
May 15 10:40:37.853371 systemd-logind[1346]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 10:40:37.853898 systemd-logind[1346]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 10:40:37.854051 systemd-logind[1346]: New seat seat0.
May 15 10:40:37.858109 systemd[1]: Started systemd-logind.service.
May 15 10:40:37.858511 systemd-networkd[1114]: ens192: Gained IPv6LL
May 15 10:40:37.865964 systemd[1]: Finished systemd-networkd-wait-online.service.
May 15 10:40:37.866266 systemd[1]: Reached target network-online.target.
May 15 10:40:37.870003 systemd[1]: Starting kubelet.service...
May 15 10:41:58.653258 systemd-resolved[1276]: Clock change detected. Flushing caches.
May 15 10:41:58.653344 systemd-timesyncd[1279]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org).
May 15 10:41:58.653371 systemd-timesyncd[1279]: Initial clock synchronization to Thu 2025-05-15 10:41:58.653228 UTC.
May 15 10:41:59.073623 locksmithd[1403]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 10:41:59.131890 tar[1351]: linux-amd64/LICENSE
May 15 10:41:59.132327 tar[1351]: linux-amd64/README.md
May 15 10:41:59.135526 systemd[1]: Finished prepare-helm.service.
May 15 10:41:59.399255 systemd[1]: Started kubelet.service.
May 15 10:41:59.420690 sshd_keygen[1364]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 10:41:59.442723 systemd[1]: Finished sshd-keygen.service.
May 15 10:41:59.443974 systemd[1]: Starting issuegen.service...
May 15 10:41:59.448054 systemd[1]: issuegen.service: Deactivated successfully.
May 15 10:41:59.448172 systemd[1]: Finished issuegen.service.
May 15 10:41:59.449262 systemd[1]: Starting systemd-user-sessions.service...
May 15 10:41:59.454036 systemd[1]: Finished systemd-user-sessions.service.
May 15 10:41:59.454927 systemd[1]: Started getty@tty1.service.
May 15 10:41:59.455724 systemd[1]: Started serial-getty@ttyS0.service.
May 15 10:41:59.455909 systemd[1]: Reached target getty.target.
May 15 10:41:59.456039 systemd[1]: Reached target multi-user.target.
May 15 10:41:59.456991 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 15 10:41:59.461614 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 15 10:41:59.461743 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 15 10:41:59.461906 systemd[1]: Startup finished in 5.721s (kernel) + 5.291s (userspace) = 11.013s.
May 15 10:41:59.485318 login[1484]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
May 15 10:41:59.487193 login[1485]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 15 10:41:59.496014 systemd[1]: Created slice user-500.slice.
May 15 10:41:59.496691 systemd[1]: Starting user-runtime-dir@500.service...
May 15 10:41:59.499325 systemd-logind[1346]: New session 2 of user core.
May 15 10:41:59.503269 systemd[1]: Finished user-runtime-dir@500.service.
May 15 10:41:59.504040 systemd[1]: Starting user@500.service...
May 15 10:41:59.507117 (systemd)[1490]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 10:41:59.555198 systemd[1490]: Queued start job for default target default.target.
May 15 10:41:59.555339 systemd[1490]: Reached target paths.target.
May 15 10:41:59.555352 systemd[1490]: Reached target sockets.target.
May 15 10:41:59.555360 systemd[1490]: Reached target timers.target.
May 15 10:41:59.555382 systemd[1490]: Reached target basic.target.
May 15 10:41:59.555458 systemd[1]: Started user@500.service.
May 15 10:41:59.556068 systemd[1]: Started session-2.scope.
May 15 10:41:59.556350 systemd[1490]: Reached target default.target.
May 15 10:41:59.556446 systemd[1490]: Startup finished in 45ms.
May 15 10:41:59.979653 kubelet[1468]: E0515 10:41:59.979624 1468 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:41:59.980912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:41:59.981001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:42:00.485765 login[1484]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 15 10:42:00.489702 systemd[1]: Started session-1.scope.
May 15 10:42:00.489895 systemd-logind[1346]: New session 1 of user core.
May 15 10:42:10.231628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 10:42:10.231810 systemd[1]: Stopped kubelet.service.
May 15 10:42:10.233117 systemd[1]: Starting kubelet.service...
May 15 10:42:10.284401 systemd[1]: Started kubelet.service.
May 15 10:42:10.340763 kubelet[1527]: E0515 10:42:10.340739 1527 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:42:10.342906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:42:10.342990 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:42:20.593516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 10:42:20.593725 systemd[1]: Stopped kubelet.service.
May 15 10:42:20.594839 systemd[1]: Starting kubelet.service...
May 15 10:42:20.645061 systemd[1]: Started kubelet.service.
May 15 10:42:20.713323 kubelet[1542]: E0515 10:42:20.713286 1542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:42:20.714493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:42:20.714588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:42:28.686550 systemd[1]: Created slice system-sshd.slice.
May 15 10:42:28.687331 systemd[1]: Started sshd@0-139.178.70.108:22-147.75.109.163:51678.service.
May 15 10:42:28.842761 sshd[1550]: Accepted publickey for core from 147.75.109.163 port 51678 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:42:28.844034 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:42:28.848003 systemd[1]: Started session-3.scope.
May 15 10:42:28.848740 systemd-logind[1346]: New session 3 of user core.
May 15 10:42:28.897420 systemd[1]: Started sshd@1-139.178.70.108:22-147.75.109.163:51694.service.
May 15 10:42:28.942384 sshd[1555]: Accepted publickey for core from 147.75.109.163 port 51694 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:42:28.943420 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:42:28.946266 systemd-logind[1346]: New session 4 of user core.
May 15 10:42:28.947163 systemd[1]: Started session-4.scope.
May 15 10:42:28.997733 sshd[1555]: pam_unix(sshd:session): session closed for user core
May 15 10:42:28.999293 systemd[1]: Started sshd@2-139.178.70.108:22-147.75.109.163:51710.service.
May 15 10:42:29.001859 systemd[1]: sshd@1-139.178.70.108:22-147.75.109.163:51694.service: Deactivated successfully.
May 15 10:42:29.003710 systemd[1]: session-4.scope: Deactivated successfully.
May 15 10:42:29.004016 systemd-logind[1346]: Session 4 logged out. Waiting for processes to exit.
May 15 10:42:29.004787 systemd-logind[1346]: Removed session 4.
May 15 10:42:29.036979 sshd[1560]: Accepted publickey for core from 147.75.109.163 port 51710 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:42:29.038130 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:42:29.041597 systemd[1]: Started session-5.scope.
May 15 10:42:29.041897 systemd-logind[1346]: New session 5 of user core.
May 15 10:42:29.090661 sshd[1560]: pam_unix(sshd:session): session closed for user core
May 15 10:42:29.092904 systemd[1]: Started sshd@3-139.178.70.108:22-147.75.109.163:51712.service.
May 15 10:42:29.093673 systemd[1]: sshd@2-139.178.70.108:22-147.75.109.163:51710.service: Deactivated successfully.
May 15 10:42:29.094422 systemd-logind[1346]: Session 5 logged out. Waiting for processes to exit.
May 15 10:42:29.094496 systemd[1]: session-5.scope: Deactivated successfully.
May 15 10:42:29.095756 systemd-logind[1346]: Removed session 5.
May 15 10:42:29.132255 sshd[1567]: Accepted publickey for core from 147.75.109.163 port 51712 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:42:29.133285 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:42:29.135706 systemd-logind[1346]: New session 6 of user core.
May 15 10:42:29.136159 systemd[1]: Started session-6.scope.
May 15 10:42:29.187009 sshd[1567]: pam_unix(sshd:session): session closed for user core
May 15 10:42:29.188931 systemd[1]: Started sshd@4-139.178.70.108:22-147.75.109.163:51718.service.
May 15 10:42:29.190925 systemd[1]: sshd@3-139.178.70.108:22-147.75.109.163:51712.service: Deactivated successfully.
May 15 10:42:29.191883 systemd[1]: session-6.scope: Deactivated successfully.
May 15 10:42:29.192245 systemd-logind[1346]: Session 6 logged out. Waiting for processes to exit.
May 15 10:42:29.193239 systemd-logind[1346]: Removed session 6.
May 15 10:42:29.228907 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 51718 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:42:29.229735 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:42:29.232332 systemd[1]: Started session-7.scope.
May 15 10:42:29.232592 systemd-logind[1346]: New session 7 of user core.
May 15 10:42:29.443808 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 10:42:29.444069 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 10:42:29.468782 systemd[1]: Starting docker.service...
May 15 10:42:29.500837 env[1590]: time="2025-05-15T10:42:29.500807556Z" level=info msg="Starting up"
May 15 10:42:29.501881 env[1590]: time="2025-05-15T10:42:29.501869557Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 10:42:29.501939 env[1590]: time="2025-05-15T10:42:29.501929010Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 10:42:29.501997 env[1590]: time="2025-05-15T10:42:29.501986029Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
May 15 10:42:29.502042 env[1590]: time="2025-05-15T10:42:29.502033275Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 10:42:29.502942 env[1590]: time="2025-05-15T10:42:29.502879366Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 10:42:29.502942 env[1590]: time="2025-05-15T10:42:29.502889865Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 10:42:29.502942 env[1590]: time="2025-05-15T10:42:29.502897830Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
May 15 10:42:29.502942 env[1590]: time="2025-05-15T10:42:29.502903294Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 10:42:29.866064 env[1590]: time="2025-05-15T10:42:29.865819495Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 15 10:42:29.866189 env[1590]: time="2025-05-15T10:42:29.866177932Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 15 10:42:29.866319 env[1590]: time="2025-05-15T10:42:29.866309974Z" level=info msg="Loading containers: start."
May 15 10:42:29.943691 kernel: Initializing XFRM netlink socket
May 15 10:42:29.968242 env[1590]: time="2025-05-15T10:42:29.968212389Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 15 10:42:30.008174 systemd-networkd[1114]: docker0: Link UP
May 15 10:42:30.016559 env[1590]: time="2025-05-15T10:42:30.016534156Z" level=info msg="Loading containers: done."
May 15 10:42:30.023748 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3279740243-merged.mount: Deactivated successfully.
May 15 10:42:30.039253 env[1590]: time="2025-05-15T10:42:30.039221325Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 10:42:30.039355 env[1590]: time="2025-05-15T10:42:30.039342122Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 15 10:42:30.039416 env[1590]: time="2025-05-15T10:42:30.039404848Z" level=info msg="Daemon has completed initialization"
May 15 10:42:30.086435 systemd[1]: Started docker.service.
May 15 10:42:30.087256 env[1590]: time="2025-05-15T10:42:30.087154194Z" level=info msg="API listen on /run/docker.sock"
May 15 10:42:30.776342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 15 10:42:30.776444 systemd[1]: Stopped kubelet.service.
May 15 10:42:30.777490 systemd[1]: Starting kubelet.service...
May 15 10:42:31.296066 systemd[1]: Started kubelet.service.
May 15 10:42:31.330709 kubelet[1720]: E0515 10:42:31.330668 1720 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:42:31.331898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:42:31.331983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:42:32.331834 env[1356]: time="2025-05-15T10:42:32.331793149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 15 10:42:32.854917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424273533.mount: Deactivated successfully.
May 15 10:42:34.086843 env[1356]: time="2025-05-15T10:42:34.086812108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:34.087609 env[1356]: time="2025-05-15T10:42:34.087592822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:34.088546 env[1356]: time="2025-05-15T10:42:34.088532645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:34.089511 env[1356]: time="2025-05-15T10:42:34.089497486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:34.089953 env[1356]: time="2025-05-15T10:42:34.089936519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 15 10:42:34.095448 env[1356]: time="2025-05-15T10:42:34.095424224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 15 10:42:35.522684 env[1356]: time="2025-05-15T10:42:35.522630637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:35.532537 env[1356]: time="2025-05-15T10:42:35.532510346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:35.540984 env[1356]: time="2025-05-15T10:42:35.540956233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:35.551765 env[1356]: time="2025-05-15T10:42:35.551736025Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:35.552207 env[1356]: time="2025-05-15T10:42:35.552187315Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 15 10:42:35.559887 env[1356]: time="2025-05-15T10:42:35.559858716Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 15 10:42:36.691837 env[1356]: time="2025-05-15T10:42:36.691802754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:36.702859 env[1356]: time="2025-05-15T10:42:36.702831348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:36.708095 env[1356]: time="2025-05-15T10:42:36.708074927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:36.713003 env[1356]: time="2025-05-15T10:42:36.712980396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:36.713606 env[1356]: time="2025-05-15T10:42:36.713586092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 15 10:42:36.719567 env[1356]: time="2025-05-15T10:42:36.719538144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 15 10:42:37.693612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662994529.mount: Deactivated successfully.
May 15 10:42:38.168537 env[1356]: time="2025-05-15T10:42:38.168468124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:38.181011 env[1356]: time="2025-05-15T10:42:38.180982522Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:38.186188 env[1356]: time="2025-05-15T10:42:38.186164921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:38.192184 env[1356]: time="2025-05-15T10:42:38.192158971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:38.192358 env[1356]: time="2025-05-15T10:42:38.192338200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 15 10:42:38.197999 env[1356]: time="2025-05-15T10:42:38.197974900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 10:42:38.719791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530510822.mount: Deactivated successfully.
May 15 10:42:39.493369 env[1356]: time="2025-05-15T10:42:39.493335397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.512831 env[1356]: time="2025-05-15T10:42:39.512261348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.519252 env[1356]: time="2025-05-15T10:42:39.519226370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.528915 env[1356]: time="2025-05-15T10:42:39.528889210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.529363 env[1356]: time="2025-05-15T10:42:39.529346641Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 15 10:42:39.534647 env[1356]: time="2025-05-15T10:42:39.534624311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 15 10:42:39.921497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107476848.mount: Deactivated successfully.
May 15 10:42:39.923356 env[1356]: time="2025-05-15T10:42:39.923334604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.923820 env[1356]: time="2025-05-15T10:42:39.923808668Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.924569 env[1356]: time="2025-05-15T10:42:39.924552509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.925339 env[1356]: time="2025-05-15T10:42:39.925325451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:39.925744 env[1356]: time="2025-05-15T10:42:39.925727017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 15 10:42:39.933532 env[1356]: time="2025-05-15T10:42:39.933505201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 15 10:42:40.340801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990309607.mount: Deactivated successfully.
May 15 10:42:41.555176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 15 10:42:41.555351 systemd[1]: Stopped kubelet.service.
May 15 10:42:41.556929 systemd[1]: Starting kubelet.service...
May 15 10:42:42.766956 env[1356]: time="2025-05-15T10:42:42.766919216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:42.769180 env[1356]: time="2025-05-15T10:42:42.769159314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:42.772554 env[1356]: time="2025-05-15T10:42:42.772290564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:42.773323 env[1356]: time="2025-05-15T10:42:42.773310079Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:42:42.774455 env[1356]: time="2025-05-15T10:42:42.774437220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 15 10:42:43.463175 update_engine[1347]: I0515 10:42:43.463140 1347 update_attempter.cc:509] Updating boot flags...
May 15 10:42:44.494489 systemd[1]: Started kubelet.service.
May 15 10:42:44.543845 kubelet[1853]: E0515 10:42:44.543822 1853 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:42:44.544924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:42:44.545012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:42:45.782268 systemd[1]: Stopped kubelet.service.
May 15 10:42:45.784108 systemd[1]: Starting kubelet.service...
May 15 10:42:45.806354 systemd[1]: Reloading.
May 15 10:42:45.846877 /usr/lib/systemd/system-generators/torcx-generator[1887]: time="2025-05-15T10:42:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:42:45.846895 /usr/lib/systemd/system-generators/torcx-generator[1887]: time="2025-05-15T10:42:45Z" level=info msg="torcx already run"
May 15 10:42:45.929968 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:42:45.929980 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:42:45.941759 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:42:45.993944 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 10:42:45.993993 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 10:42:45.994146 systemd[1]: Stopped kubelet.service.
May 15 10:42:45.995462 systemd[1]: Starting kubelet.service...
May 15 10:42:46.368119 systemd[1]: Started kubelet.service.
May 15 10:42:46.441101 kubelet[1962]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:42:46.441386 kubelet[1962]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 10:42:46.441444 kubelet[1962]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:42:46.441578 kubelet[1962]: I0515 10:42:46.441556 1962 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 10:42:46.602067 kubelet[1962]: I0515 10:42:46.602042 1962 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 15 10:42:46.602067 kubelet[1962]: I0515 10:42:46.602060 1962 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 10:42:46.602197 kubelet[1962]: I0515 10:42:46.602186 1962 server.go:927] "Client rotation is on, will bootstrap in background"
May 15 10:42:46.612288 kubelet[1962]: I0515 10:42:46.611945 1962 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 10:42:46.612872 kubelet[1962]: E0515 10:42:46.612863 1962 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.621389 kubelet[1962]: I0515 10:42:46.621338 1962 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 10:42:46.622515 kubelet[1962]: I0515 10:42:46.622494 1962 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 10:42:46.622665 kubelet[1962]: I0515 10:42:46.622562 1962 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 15 10:42:46.623125 kubelet[1962]: I0515 10:42:46.623116 1962 topology_manager.go:138] "Creating topology manager with none policy"
May 15 10:42:46.623171 kubelet[1962]: I0515 10:42:46.623165 1962 container_manager_linux.go:301] "Creating device plugin manager"
May 15 10:42:46.623998 kubelet[1962]: I0515 10:42:46.623990 1962 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:42:46.624713 kubelet[1962]: I0515 10:42:46.624705 1962 kubelet.go:400] "Attempting to sync node with API server"
May 15 10:42:46.624765 kubelet[1962]: I0515 10:42:46.624758 1962 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 10:42:46.624819 kubelet[1962]: I0515 10:42:46.624812 1962 kubelet.go:312] "Adding apiserver pod source"
May 15 10:42:46.624904 kubelet[1962]: I0515 10:42:46.624898 1962 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 10:42:46.631361 kubelet[1962]: W0515 10:42:46.631208 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.631361 kubelet[1962]: E0515 10:42:46.631230 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.631428 kubelet[1962]: I0515 10:42:46.631413 1962 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 10:42:46.633034 kubelet[1962]: I0515 10:42:46.633008 1962 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 10:42:46.633034 kubelet[1962]: W0515 10:42:46.633033 1962 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 10:42:46.633668 kubelet[1962]: I0515 10:42:46.633292 1962 server.go:1264] "Started kubelet"
May 15 10:42:46.633668 kubelet[1962]: W0515 10:42:46.633347 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.633668 kubelet[1962]: E0515 10:42:46.633371 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.646086 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 15 10:42:46.646169 kubelet[1962]: I0515 10:42:46.646154 1962 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 10:42:46.647460 kubelet[1962]: E0515 10:42:46.647386 1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fad564e2b1681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:42:46.633281153 +0000 UTC m=+0.261332817,LastTimestamp:2025-05-15 10:42:46.633281153 +0000 UTC m=+0.261332817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 10:42:46.648428 kubelet[1962]: E0515 10:42:46.648418 1962 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 10:42:46.649639 kubelet[1962]: I0515 10:42:46.649625 1962 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 15 10:42:46.650559 kubelet[1962]: I0515 10:42:46.650542 1962 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 10:42:46.651305 kubelet[1962]: I0515 10:42:46.651295 1962 server.go:455] "Adding debug handlers to kubelet server"
May 15 10:42:46.651876 kubelet[1962]: I0515 10:42:46.651861 1962 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 10:42:46.651925 kubelet[1962]: I0515 10:42:46.651895 1962 reconciler.go:26] "Reconciler: start to sync state"
May 15 10:42:46.652161 kubelet[1962]: I0515 10:42:46.652135 1962 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 10:42:46.652325 kubelet[1962]: I0515 10:42:46.652316 1962 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 10:42:46.653033 kubelet[1962]: I0515 10:42:46.653024 1962 factory.go:221] Registration of the systemd container factory successfully
May 15 10:42:46.653129 kubelet[1962]: I0515 10:42:46.653106 1962 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 10:42:46.653515 kubelet[1962]: E0515 10:42:46.653500 1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms"
May 15 10:42:46.654190 kubelet[1962]: I0515 10:42:46.654181 1962 factory.go:221] Registration of the containerd container factory successfully
May 15 10:42:46.659113 kubelet[1962]: W0515 10:42:46.659089 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.659168 kubelet[1962]: E0515 10:42:46.659121 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.669476 kubelet[1962]: I0515 10:42:46.669458 1962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 10:42:46.671343 kubelet[1962]: I0515 10:42:46.671334 1962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 10:42:46.671406 kubelet[1962]: I0515 10:42:46.671399 1962 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 10:42:46.671453 kubelet[1962]: I0515 10:42:46.671447 1962 kubelet.go:2337] "Starting kubelet main sync loop"
May 15 10:42:46.671518 kubelet[1962]: E0515 10:42:46.671508 1962 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 10:42:46.673057 kubelet[1962]: W0515 10:42:46.673029 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.673102 kubelet[1962]: E0515 10:42:46.673060 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused
May 15 10:42:46.673236 kubelet[1962]: I0515 10:42:46.673223 1962 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 10:42:46.673236 kubelet[1962]: I0515 10:42:46.673232 1962 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 10:42:46.673293 kubelet[1962]: I0515 10:42:46.673241 1962 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:42:46.674149 kubelet[1962]: I0515 10:42:46.674138 1962 policy_none.go:49] "None policy: Start"
May 15 10:42:46.674399 kubelet[1962]: I0515 10:42:46.674385 1962 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 10:42:46.675235 kubelet[1962]: I0515 10:42:46.675222 1962 state_mem.go:35] "Initializing new in-memory state store"
May 15 10:42:46.677775 kubelet[1962]: I0515 10:42:46.677762 1962 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 10:42:46.677853 kubelet[1962]: I0515 10:42:46.677833 1962 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 10:42:46.677897 kubelet[1962]: I0515 10:42:46.677888 1962 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 10:42:46.678814 kubelet[1962]: E0515 10:42:46.678806 1962 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 10:42:46.751364 kubelet[1962]: I0515 10:42:46.751347 1962 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 10:42:46.751832 kubelet[1962]: E0515 10:42:46.751817 1962 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost"
May 15 10:42:46.772266 kubelet[1962]: I0515 10:42:46.772201 1962 topology_manager.go:215] "Topology Admit Handler"
podUID="faee1587b04b25c35e691c620ef42a58" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:42:46.773043 kubelet[1962]: I0515 10:42:46.773030 1962 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:42:46.774219 kubelet[1962]: I0515 10:42:46.774206 1962 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:42:46.854775 kubelet[1962]: E0515 10:42:46.854707 1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" May 15 10:42:46.953092 kubelet[1962]: I0515 10:42:46.953001 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/faee1587b04b25c35e691c620ef42a58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"faee1587b04b25c35e691c620ef42a58\") " pod="kube-system/kube-apiserver-localhost" May 15 10:42:46.953092 kubelet[1962]: I0515 10:42:46.953041 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:46.953092 kubelet[1962]: I0515 10:42:46.953057 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:46.953092 kubelet[1962]: I0515 10:42:46.953071 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:46.953722 kubelet[1962]: I0515 10:42:46.953708 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:42:46.953815 kubelet[1962]: I0515 10:42:46.953803 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/faee1587b04b25c35e691c620ef42a58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"faee1587b04b25c35e691c620ef42a58\") " pod="kube-system/kube-apiserver-localhost" May 15 10:42:46.953896 kubelet[1962]: I0515 10:42:46.953885 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/faee1587b04b25c35e691c620ef42a58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"faee1587b04b25c35e691c620ef42a58\") " pod="kube-system/kube-apiserver-localhost" May 15 10:42:46.953972 kubelet[1962]: I0515 10:42:46.953961 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" 
(UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:46.954045 kubelet[1962]: I0515 10:42:46.954034 1962 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:46.954960 kubelet[1962]: I0515 10:42:46.954927 1962 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:42:46.955248 kubelet[1962]: E0515 10:42:46.955227 1962 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 15 10:42:47.078797 env[1356]: time="2025-05-15T10:42:47.078765767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 10:42:47.079696 env[1356]: time="2025-05-15T10:42:47.079663902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 10:42:47.080526 env[1356]: time="2025-05-15T10:42:47.080405372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:faee1587b04b25c35e691c620ef42a58,Namespace:kube-system,Attempt:0,}" May 15 10:42:47.255801 kubelet[1962]: E0515 10:42:47.255661 1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" May 15 10:42:47.357553 kubelet[1962]: I0515 10:42:47.357522 1962 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:42:47.357819 kubelet[1962]: E0515 10:42:47.357799 1962 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 15 10:42:47.487865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474310310.mount: Deactivated successfully. May 15 10:42:47.490169 env[1356]: time="2025-05-15T10:42:47.490150234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.490752 env[1356]: time="2025-05-15T10:42:47.490739554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.491250 env[1356]: time="2025-05-15T10:42:47.491235145Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.492230 env[1356]: time="2025-05-15T10:42:47.492218816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.493545 env[1356]: time="2025-05-15T10:42:47.493533016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.495477 env[1356]: time="2025-05-15T10:42:47.495460728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 15 10:42:47.497412 env[1356]: time="2025-05-15T10:42:47.497397676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.497941 env[1356]: time="2025-05-15T10:42:47.497920480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.498485 env[1356]: time="2025-05-15T10:42:47.498459717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.498975 env[1356]: time="2025-05-15T10:42:47.498952670Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.499477 env[1356]: time="2025-05-15T10:42:47.499465191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.499958 env[1356]: time="2025-05-15T10:42:47.499946947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:42:47.513352 env[1356]: time="2025-05-15T10:42:47.511601936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:42:47.513352 env[1356]: time="2025-05-15T10:42:47.511654039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:42:47.513352 env[1356]: time="2025-05-15T10:42:47.511661829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:42:47.513352 env[1356]: time="2025-05-15T10:42:47.511749825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4168945959d828c554280436347e0e89a59ed5eb6f3b8a4a0e255b6c48c6bff pid=2005 runtime=io.containerd.runc.v2 May 15 10:42:47.513976 env[1356]: time="2025-05-15T10:42:47.513244548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:42:47.513976 env[1356]: time="2025-05-15T10:42:47.513263862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:42:47.513976 env[1356]: time="2025-05-15T10:42:47.513270514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:42:47.517990 env[1356]: time="2025-05-15T10:42:47.517765496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a979cfd7aa9cc6ce0b6812ea69c42e21d4d40fefe75c059cb84eda8c670c924d pid=2007 runtime=io.containerd.runc.v2 May 15 10:42:47.521603 env[1356]: time="2025-05-15T10:42:47.521573296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:42:47.521703 env[1356]: time="2025-05-15T10:42:47.521688300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:42:47.521770 env[1356]: time="2025-05-15T10:42:47.521757894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:42:47.521910 env[1356]: time="2025-05-15T10:42:47.521876713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61e70c29b66e90dbb5e0f4e526b16fdb986d67884e9a6ede6ee61bb8bcd33f1b pid=2038 runtime=io.containerd.runc.v2 May 15 10:42:47.568489 env[1356]: time="2025-05-15T10:42:47.568457238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:faee1587b04b25c35e691c620ef42a58,Namespace:kube-system,Attempt:0,} returns sandbox id \"a979cfd7aa9cc6ce0b6812ea69c42e21d4d40fefe75c059cb84eda8c670c924d\"" May 15 10:42:47.576659 env[1356]: time="2025-05-15T10:42:47.576640478Z" level=info msg="CreateContainer within sandbox \"a979cfd7aa9cc6ce0b6812ea69c42e21d4d40fefe75c059cb84eda8c670c924d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 10:42:47.583763 env[1356]: time="2025-05-15T10:42:47.583738194Z" level=info msg="CreateContainer within sandbox \"a979cfd7aa9cc6ce0b6812ea69c42e21d4d40fefe75c059cb84eda8c670c924d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e5f2c7fc642605cfbce3abc761bda6efea3a7c5859a722aa3846b3b3a4f7711\"" May 15 10:42:47.585102 env[1356]: time="2025-05-15T10:42:47.584124129Z" level=info msg="StartContainer for \"5e5f2c7fc642605cfbce3abc761bda6efea3a7c5859a722aa3846b3b3a4f7711\"" May 15 10:42:47.590844 env[1356]: time="2025-05-15T10:42:47.590818548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"61e70c29b66e90dbb5e0f4e526b16fdb986d67884e9a6ede6ee61bb8bcd33f1b\"" May 15 10:42:47.592485 env[1356]: time="2025-05-15T10:42:47.592471695Z" level=info msg="CreateContainer within sandbox \"61e70c29b66e90dbb5e0f4e526b16fdb986d67884e9a6ede6ee61bb8bcd33f1b\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 10:42:47.595926 env[1356]: time="2025-05-15T10:42:47.594346199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4168945959d828c554280436347e0e89a59ed5eb6f3b8a4a0e255b6c48c6bff\"" May 15 10:42:47.595985 env[1356]: time="2025-05-15T10:42:47.595972877Z" level=info msg="CreateContainer within sandbox \"a4168945959d828c554280436347e0e89a59ed5eb6f3b8a4a0e255b6c48c6bff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 10:42:47.599316 env[1356]: time="2025-05-15T10:42:47.599293030Z" level=info msg="CreateContainer within sandbox \"61e70c29b66e90dbb5e0f4e526b16fdb986d67884e9a6ede6ee61bb8bcd33f1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"611eebee4f2c6f105faf8e89264669a0d82b74b52d1bfe2691639c48ac8050b1\"" May 15 10:42:47.599601 env[1356]: time="2025-05-15T10:42:47.599580471Z" level=info msg="StartContainer for \"611eebee4f2c6f105faf8e89264669a0d82b74b52d1bfe2691639c48ac8050b1\"" May 15 10:42:47.602361 env[1356]: time="2025-05-15T10:42:47.602343473Z" level=info msg="CreateContainer within sandbox \"a4168945959d828c554280436347e0e89a59ed5eb6f3b8a4a0e255b6c48c6bff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5a971c1fb56e51871605b1b399d679f8bf066cfc654baa5bdb97021957a92a8e\"" May 15 10:42:47.602587 env[1356]: time="2025-05-15T10:42:47.602570259Z" level=info msg="StartContainer for \"5a971c1fb56e51871605b1b399d679f8bf066cfc654baa5bdb97021957a92a8e\"" May 15 10:42:47.604756 kubelet[1962]: W0515 10:42:47.604711 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.604756 
kubelet[1962]: E0515 10:42:47.604741 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.663656 env[1356]: time="2025-05-15T10:42:47.663631256Z" level=info msg="StartContainer for \"5e5f2c7fc642605cfbce3abc761bda6efea3a7c5859a722aa3846b3b3a4f7711\" returns successfully" May 15 10:42:47.671832 env[1356]: time="2025-05-15T10:42:47.671815852Z" level=info msg="StartContainer for \"611eebee4f2c6f105faf8e89264669a0d82b74b52d1bfe2691639c48ac8050b1\" returns successfully" May 15 10:42:47.690695 env[1356]: time="2025-05-15T10:42:47.690658879Z" level=info msg="StartContainer for \"5a971c1fb56e51871605b1b399d679f8bf066cfc654baa5bdb97021957a92a8e\" returns successfully" May 15 10:42:47.753844 kubelet[1962]: W0515 10:42:47.753768 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.753844 kubelet[1962]: E0515 10:42:47.753830 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.799486 kubelet[1962]: W0515 10:42:47.799339 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.799486 kubelet[1962]: E0515 10:42:47.799402 1962 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.875145 kubelet[1962]: W0515 10:42:47.875078 1962 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:47.875145 kubelet[1962]: E0515 10:42:47.875127 1962 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused May 15 10:42:48.056122 kubelet[1962]: E0515 10:42:48.056094 1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" May 15 10:42:48.060395 kubelet[1962]: E0515 10:42:48.060305 1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fad564e2b1681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:42:46.633281153 +0000 UTC m=+0.261332817,LastTimestamp:2025-05-15 10:42:46.633281153 +0000 UTC m=+0.261332817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 10:42:48.158627 kubelet[1962]: I0515 10:42:48.158610 1962 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:42:48.158900 kubelet[1962]: E0515 10:42:48.158888 1962 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" May 15 10:42:49.443493 kubelet[1962]: E0515 10:42:49.443476 1962 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 15 10:42:49.658262 kubelet[1962]: E0515 10:42:49.658236 1962 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 10:42:49.760539 kubelet[1962]: I0515 10:42:49.760471 1962 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:42:49.768746 kubelet[1962]: I0515 10:42:49.768724 1962 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:42:49.773339 kubelet[1962]: E0515 10:42:49.773321 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:49.874050 kubelet[1962]: E0515 10:42:49.874014 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:49.974630 kubelet[1962]: E0515 10:42:49.974590 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:50.075175 kubelet[1962]: E0515 10:42:50.075140 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:50.175984 kubelet[1962]: E0515 10:42:50.175957 1962 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:50.276522 kubelet[1962]: E0515 10:42:50.276493 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:50.377160 kubelet[1962]: E0515 10:42:50.377056 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:50.477588 kubelet[1962]: E0515 10:42:50.477563 1962 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:42:50.632520 kubelet[1962]: I0515 10:42:50.632435 1962 apiserver.go:52] "Watching apiserver" May 15 10:42:50.652158 kubelet[1962]: I0515 10:42:50.652129 1962 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:42:51.371811 systemd[1]: Reloading. May 15 10:42:51.432406 /usr/lib/systemd/system-generators/torcx-generator[2246]: time="2025-05-15T10:42:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:42:51.432644 /usr/lib/systemd/system-generators/torcx-generator[2246]: time="2025-05-15T10:42:51Z" level=info msg="torcx already run" May 15 10:42:51.497201 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:42:51.497314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 15 10:42:51.509147 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:42:51.564161 kubelet[1962]: I0515 10:42:51.564148 1962 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:42:51.564560 systemd[1]: Stopping kubelet.service... May 15 10:42:51.573898 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:42:51.574068 systemd[1]: Stopped kubelet.service. May 15 10:42:51.575784 systemd[1]: Starting kubelet.service... May 15 10:42:52.587693 systemd[1]: Started kubelet.service. May 15 10:42:52.677517 sudo[2332]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 10:42:52.677666 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 15 10:42:52.704544 kubelet[2321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:42:52.705016 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 10:42:52.705060 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 10:42:52.705162 kubelet[2321]: I0515 10:42:52.705141 2321 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:42:52.709285 kubelet[2321]: I0515 10:42:52.709206 2321 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 10:42:52.709285 kubelet[2321]: I0515 10:42:52.709229 2321 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:42:52.709407 kubelet[2321]: I0515 10:42:52.709373 2321 server.go:927] "Client rotation is on, will bootstrap in background" May 15 10:42:52.710203 kubelet[2321]: I0515 10:42:52.710188 2321 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 10:42:52.712446 kubelet[2321]: I0515 10:42:52.712432 2321 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:42:52.722952 kubelet[2321]: I0515 10:42:52.722934 2321 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 10:42:52.723335 kubelet[2321]: I0515 10:42:52.723314 2321 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:42:52.723505 kubelet[2321]: I0515 10:42:52.723387 2321 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 10:42:52.723608 kubelet[2321]: I0515 10:42:52.723599 2321 topology_manager.go:138] "Creating topology manager with none policy" May 15 
10:42:52.723659 kubelet[2321]: I0515 10:42:52.723652 2321 container_manager_linux.go:301] "Creating device plugin manager" May 15 10:42:52.723750 kubelet[2321]: I0515 10:42:52.723742 2321 state_mem.go:36] "Initialized new in-memory state store" May 15 10:42:52.723882 kubelet[2321]: I0515 10:42:52.723875 2321 kubelet.go:400] "Attempting to sync node with API server" May 15 10:42:52.723932 kubelet[2321]: I0515 10:42:52.723924 2321 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:42:52.723994 kubelet[2321]: I0515 10:42:52.723986 2321 kubelet.go:312] "Adding apiserver pod source" May 15 10:42:52.724051 kubelet[2321]: I0515 10:42:52.724043 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:42:52.735920 kubelet[2321]: I0515 10:42:52.735903 2321 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:42:52.736135 kubelet[2321]: I0515 10:42:52.736126 2321 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:42:52.736433 kubelet[2321]: I0515 10:42:52.736425 2321 server.go:1264] "Started kubelet" May 15 10:42:52.738397 kubelet[2321]: I0515 10:42:52.738388 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:42:52.740750 kubelet[2321]: I0515 10:42:52.740729 2321 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:42:52.741558 kubelet[2321]: I0515 10:42:52.741548 2321 server.go:455] "Adding debug handlers to kubelet server" May 15 10:42:52.743571 kubelet[2321]: I0515 10:42:52.743535 2321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:42:52.743739 kubelet[2321]: I0515 10:42:52.743732 2321 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:42:52.747017 kubelet[2321]: I0515 10:42:52.747004 2321 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 10:42:52.748168 kubelet[2321]: I0515 10:42:52.748156 2321 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:42:52.748587 kubelet[2321]: I0515 10:42:52.748579 2321 reconciler.go:26] "Reconciler: start to sync state" May 15 10:42:52.752257 kubelet[2321]: I0515 10:42:52.752243 2321 factory.go:221] Registration of the systemd container factory successfully May 15 10:42:52.752395 kubelet[2321]: I0515 10:42:52.752383 2321 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:42:52.752589 kubelet[2321]: E0515 10:42:52.752580 2321 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:42:52.755069 kubelet[2321]: I0515 10:42:52.755053 2321 factory.go:221] Registration of the containerd container factory successfully May 15 10:42:52.763789 kubelet[2321]: I0515 10:42:52.763765 2321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:42:52.764467 kubelet[2321]: I0515 10:42:52.764455 2321 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 10:42:52.764552 kubelet[2321]: I0515 10:42:52.764544 2321 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 10:42:52.764608 kubelet[2321]: I0515 10:42:52.764601 2321 kubelet.go:2337] "Starting kubelet main sync loop" May 15 10:42:52.764726 kubelet[2321]: E0515 10:42:52.764671 2321 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:42:52.828160 kubelet[2321]: I0515 10:42:52.828136 2321 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 10:42:52.828281 kubelet[2321]: I0515 10:42:52.828272 2321 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 10:42:52.828386 kubelet[2321]: I0515 10:42:52.828379 2321 state_mem.go:36] "Initialized new in-memory state store" May 15 10:42:52.828598 kubelet[2321]: I0515 10:42:52.828579 2321 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 10:42:52.828657 kubelet[2321]: I0515 10:42:52.828641 2321 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 10:42:52.828727 kubelet[2321]: I0515 10:42:52.828720 2321 policy_none.go:49] "None policy: Start" May 15 10:42:52.830170 kubelet[2321]: I0515 10:42:52.830153 2321 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 10:42:52.830250 kubelet[2321]: I0515 10:42:52.830243 2321 state_mem.go:35] "Initializing new in-memory state store" May 15 10:42:52.830434 kubelet[2321]: I0515 10:42:52.830426 2321 state_mem.go:75] "Updated machine memory state" May 15 10:42:52.831345 kubelet[2321]: I0515 10:42:52.831336 2321 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:42:52.833296 kubelet[2321]: I0515 10:42:52.833190 2321 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:42:52.835913 kubelet[2321]: I0515 10:42:52.835894 2321 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:42:52.850839 kubelet[2321]: I0515 10:42:52.850776 2321 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:42:52.864345 kubelet[2321]: I0515 10:42:52.864214 2321 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 10:42:52.864345 kubelet[2321]: I0515 10:42:52.864286 2321 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:42:52.864946 kubelet[2321]: I0515 10:42:52.864915 2321 topology_manager.go:215] "Topology Admit Handler" podUID="faee1587b04b25c35e691c620ef42a58" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:42:52.864993 kubelet[2321]: I0515 10:42:52.864981 2321 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:42:52.865060 kubelet[2321]: I0515 10:42:52.865046 2321 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:42:52.950731 kubelet[2321]: I0515 10:42:52.950708 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:42:52.950865 kubelet[2321]: I0515 10:42:52.950852 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:52.950935 kubelet[2321]: I0515 10:42:52.950926 
2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:52.951001 kubelet[2321]: I0515 10:42:52.950993 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/faee1587b04b25c35e691c620ef42a58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"faee1587b04b25c35e691c620ef42a58\") " pod="kube-system/kube-apiserver-localhost" May 15 10:42:52.951056 kubelet[2321]: I0515 10:42:52.951047 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:52.951119 kubelet[2321]: I0515 10:42:52.951110 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:42:52.951176 kubelet[2321]: I0515 10:42:52.951162 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 
10:42:52.951237 kubelet[2321]: I0515 10:42:52.951229 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/faee1587b04b25c35e691c620ef42a58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"faee1587b04b25c35e691c620ef42a58\") " pod="kube-system/kube-apiserver-localhost" May 15 10:42:52.951291 kubelet[2321]: I0515 10:42:52.951278 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/faee1587b04b25c35e691c620ef42a58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"faee1587b04b25c35e691c620ef42a58\") " pod="kube-system/kube-apiserver-localhost" May 15 10:42:53.278273 sudo[2332]: pam_unix(sudo:session): session closed for user root May 15 10:42:53.730505 kubelet[2321]: I0515 10:42:53.730491 2321 apiserver.go:52] "Watching apiserver" May 15 10:42:53.748553 kubelet[2321]: I0515 10:42:53.748534 2321 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:42:53.802074 kubelet[2321]: E0515 10:42:53.802058 2321 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:42:53.814048 kubelet[2321]: I0515 10:42:53.814008 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.813998359 podStartE2EDuration="1.813998359s" podCreationTimestamp="2025-05-15 10:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:42:53.812313802 +0000 UTC m=+1.164535438" watchObservedRunningTime="2025-05-15 10:42:53.813998359 +0000 UTC m=+1.166219995" May 15 10:42:53.819180 kubelet[2321]: I0515 10:42:53.819152 2321 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8191403290000001 podStartE2EDuration="1.819140329s" podCreationTimestamp="2025-05-15 10:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:42:53.819135393 +0000 UTC m=+1.171357029" watchObservedRunningTime="2025-05-15 10:42:53.819140329 +0000 UTC m=+1.171361963" May 15 10:42:53.823665 kubelet[2321]: I0515 10:42:53.823634 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.82362544 podStartE2EDuration="1.82362544s" podCreationTimestamp="2025-05-15 10:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:42:53.823497315 +0000 UTC m=+1.175718953" watchObservedRunningTime="2025-05-15 10:42:53.82362544 +0000 UTC m=+1.175847077" May 15 10:42:54.809190 sudo[1580]: pam_unix(sudo:session): session closed for user root May 15 10:42:54.811333 sshd[1574]: pam_unix(sshd:session): session closed for user core May 15 10:42:54.813200 systemd[1]: sshd@4-139.178.70.108:22-147.75.109.163:51718.service: Deactivated successfully. May 15 10:42:54.814321 systemd[1]: session-7.scope: Deactivated successfully. May 15 10:42:54.814738 systemd-logind[1346]: Session 7 logged out. Waiting for processes to exit. May 15 10:42:54.815392 systemd-logind[1346]: Removed session 7. 
May 15 10:43:04.095060 kubelet[2321]: I0515 10:43:04.095037 2321 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 10:43:04.095397 kubelet[2321]: I0515 10:43:04.095315 2321 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 10:43:04.095426 env[1356]: time="2025-05-15T10:43:04.095226935Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 10:43:05.026534 kubelet[2321]: I0515 10:43:05.026510 2321 topology_manager.go:215] "Topology Admit Handler" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" podNamespace="kube-system" podName="cilium-wwt4l" May 15 10:43:05.027250 kubelet[2321]: I0515 10:43:05.027235 2321 topology_manager.go:215] "Topology Admit Handler" podUID="8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb" podNamespace="kube-system" podName="kube-proxy-pc7np" May 15 10:43:05.033896 kubelet[2321]: W0515 10:43:05.033876 2321 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034091 kubelet[2321]: E0515 10:43:05.034079 2321 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034188 kubelet[2321]: W0515 10:43:05.034179 2321 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034261 kubelet[2321]: E0515 10:43:05.034254 2321 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034330 kubelet[2321]: W0515 10:43:05.034322 2321 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034383 kubelet[2321]: E0515 10:43:05.034376 2321 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034460 kubelet[2321]: W0515 10:43:05.034452 2321 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034515 kubelet[2321]: E0515 10:43:05.034508 2321 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this 
object May 15 10:43:05.034597 kubelet[2321]: W0515 10:43:05.034589 2321 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.034648 kubelet[2321]: E0515 10:43:05.034641 2321 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:43:05.125093 kubelet[2321]: I0515 10:43:05.125069 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb-lib-modules\") pod \"kube-proxy-pc7np\" (UID: \"8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb\") " pod="kube-system/kube-proxy-pc7np" May 15 10:43:05.125093 kubelet[2321]: I0515 10:43:05.125095 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-lib-modules\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125369 kubelet[2321]: I0515 10:43:05.125108 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-hubble-tls\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125369 kubelet[2321]: I0515 10:43:05.125119 2321 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbvt\" (UniqueName: \"kubernetes.io/projected/8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb-kube-api-access-6rbvt\") pod \"kube-proxy-pc7np\" (UID: \"8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb\") " pod="kube-system/kube-proxy-pc7np" May 15 10:43:05.125369 kubelet[2321]: I0515 10:43:05.125128 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-bpf-maps\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125369 kubelet[2321]: I0515 10:43:05.125156 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-net\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125369 kubelet[2321]: I0515 10:43:05.125167 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-kernel\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125369 kubelet[2321]: I0515 10:43:05.125176 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-hostproc\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125488 kubelet[2321]: I0515 10:43:05.125185 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-run\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125488 kubelet[2321]: I0515 10:43:05.125193 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-xtables-lock\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125488 kubelet[2321]: I0515 10:43:05.125201 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-config-path\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125488 kubelet[2321]: I0515 10:43:05.125211 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb-xtables-lock\") pod \"kube-proxy-pc7np\" (UID: \"8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb\") " pod="kube-system/kube-proxy-pc7np" May 15 10:43:05.125488 kubelet[2321]: I0515 10:43:05.125219 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cni-path\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125488 kubelet[2321]: I0515 10:43:05.125228 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b56cdb3-6b72-4493-896f-9c54ddf87971-clustermesh-secrets\") pod \"cilium-wwt4l\" (UID: 
\"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125601 kubelet[2321]: I0515 10:43:05.125238 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb-kube-proxy\") pod \"kube-proxy-pc7np\" (UID: \"8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb\") " pod="kube-system/kube-proxy-pc7np" May 15 10:43:05.125601 kubelet[2321]: I0515 10:43:05.125246 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-etc-cni-netd\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125601 kubelet[2321]: I0515 10:43:05.125256 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-cgroup\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.125601 kubelet[2321]: I0515 10:43:05.125266 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkx8g\" (UniqueName: \"kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-kube-api-access-qkx8g\") pod \"cilium-wwt4l\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") " pod="kube-system/cilium-wwt4l" May 15 10:43:05.300119 kubelet[2321]: I0515 10:43:05.300042 2321 topology_manager.go:215] "Topology Admit Handler" podUID="06162253-c5ba-41c4-a121-45d3a08feaea" podNamespace="kube-system" podName="cilium-operator-599987898-wnx68" May 15 10:43:05.326539 kubelet[2321]: I0515 10:43:05.326509 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xdh\" 
(UniqueName: \"kubernetes.io/projected/06162253-c5ba-41c4-a121-45d3a08feaea-kube-api-access-x7xdh\") pod \"cilium-operator-599987898-wnx68\" (UID: \"06162253-c5ba-41c4-a121-45d3a08feaea\") " pod="kube-system/cilium-operator-599987898-wnx68" May 15 10:43:05.326657 kubelet[2321]: I0515 10:43:05.326547 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06162253-c5ba-41c4-a121-45d3a08feaea-cilium-config-path\") pod \"cilium-operator-599987898-wnx68\" (UID: \"06162253-c5ba-41c4-a121-45d3a08feaea\") " pod="kube-system/cilium-operator-599987898-wnx68" May 15 10:43:06.227630 kubelet[2321]: E0515 10:43:06.227607 2321 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 15 10:43:06.228430 kubelet[2321]: E0515 10:43:06.228419 2321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-config-path podName:2b56cdb3-6b72-4493-896f-9c54ddf87971 nodeName:}" failed. No retries permitted until 2025-05-15 10:43:06.728405787 +0000 UTC m=+14.080627421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-config-path") pod "cilium-wwt4l" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971") : failed to sync configmap cache: timed out waiting for the condition May 15 10:43:06.229723 kubelet[2321]: E0515 10:43:06.229710 2321 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 15 10:43:06.229797 kubelet[2321]: E0515 10:43:06.229789 2321 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-wwt4l: failed to sync secret cache: timed out waiting for the condition May 15 10:43:06.229881 kubelet[2321]: E0515 10:43:06.229874 2321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-hubble-tls podName:2b56cdb3-6b72-4493-896f-9c54ddf87971 nodeName:}" failed. No retries permitted until 2025-05-15 10:43:06.729862097 +0000 UTC m=+14.082083733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-hubble-tls") pod "cilium-wwt4l" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971") : failed to sync secret cache: timed out waiting for the condition May 15 10:43:06.232370 env[1356]: time="2025-05-15T10:43:06.232062857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pc7np,Uid:8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb,Namespace:kube-system,Attempt:0,}" May 15 10:43:06.242704 env[1356]: time="2025-05-15T10:43:06.242651236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:43:06.242843 env[1356]: time="2025-05-15T10:43:06.242692191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:43:06.242843 env[1356]: time="2025-05-15T10:43:06.242821557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:43:06.243048 env[1356]: time="2025-05-15T10:43:06.243017103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec64679c89a653adba6aea9909ed8dc988b0dc57d20091510853d281adfe549a pid=2401 runtime=io.containerd.runc.v2 May 15 10:43:06.268055 env[1356]: time="2025-05-15T10:43:06.268027156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pc7np,Uid:8bcd91ff-d040-4bfb-bb92-b1292a1ed4bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec64679c89a653adba6aea9909ed8dc988b0dc57d20091510853d281adfe549a\"" May 15 10:43:06.270343 env[1356]: time="2025-05-15T10:43:06.270320382Z" level=info msg="CreateContainer within sandbox \"ec64679c89a653adba6aea9909ed8dc988b0dc57d20091510853d281adfe549a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:43:06.323590 env[1356]: time="2025-05-15T10:43:06.323555664Z" level=info msg="CreateContainer within sandbox \"ec64679c89a653adba6aea9909ed8dc988b0dc57d20091510853d281adfe549a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c346b1efe328eda3214bf950f93b1dfbf33c129baf62f58591c0aa1a0b89867b\"" May 15 10:43:06.324055 env[1356]: time="2025-05-15T10:43:06.324040803Z" level=info msg="StartContainer for \"c346b1efe328eda3214bf950f93b1dfbf33c129baf62f58591c0aa1a0b89867b\"" May 15 10:43:06.359933 env[1356]: time="2025-05-15T10:43:06.359906831Z" level=info msg="StartContainer for \"c346b1efe328eda3214bf950f93b1dfbf33c129baf62f58591c0aa1a0b89867b\" returns successfully" May 15 10:43:06.427781 kubelet[2321]: E0515 10:43:06.427567 2321 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting 
for the condition May 15 10:43:06.427781 kubelet[2321]: E0515 10:43:06.427627 2321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06162253-c5ba-41c4-a121-45d3a08feaea-cilium-config-path podName:06162253-c5ba-41c4-a121-45d3a08feaea nodeName:}" failed. No retries permitted until 2025-05-15 10:43:06.92761533 +0000 UTC m=+14.279836961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/06162253-c5ba-41c4-a121-45d3a08feaea-cilium-config-path") pod "cilium-operator-599987898-wnx68" (UID: "06162253-c5ba-41c4-a121-45d3a08feaea") : failed to sync configmap cache: timed out waiting for the condition May 15 10:43:06.835932 env[1356]: time="2025-05-15T10:43:06.835907561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwt4l,Uid:2b56cdb3-6b72-4493-896f-9c54ddf87971,Namespace:kube-system,Attempt:0,}" May 15 10:43:06.840918 kubelet[2321]: I0515 10:43:06.840882 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pc7np" podStartSLOduration=1.840869964 podStartE2EDuration="1.840869964s" podCreationTimestamp="2025-05-15 10:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:43:06.840704917 +0000 UTC m=+14.192926553" watchObservedRunningTime="2025-05-15 10:43:06.840869964 +0000 UTC m=+14.193091607" May 15 10:43:06.849529 env[1356]: time="2025-05-15T10:43:06.849496106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:43:06.849786 env[1356]: time="2025-05-15T10:43:06.849772877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:43:06.849873 env[1356]: time="2025-05-15T10:43:06.849859325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:43:06.850070 env[1356]: time="2025-05-15T10:43:06.850045883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b pid=2511 runtime=io.containerd.runc.v2 May 15 10:43:06.872313 env[1356]: time="2025-05-15T10:43:06.872289351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwt4l,Uid:2b56cdb3-6b72-4493-896f-9c54ddf87971,Namespace:kube-system,Attempt:0,} returns sandbox id \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\"" May 15 10:43:06.873372 env[1356]: time="2025-05-15T10:43:06.873357574Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 10:43:07.102788 env[1356]: time="2025-05-15T10:43:07.102506906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wnx68,Uid:06162253-c5ba-41c4-a121-45d3a08feaea,Namespace:kube-system,Attempt:0,}" May 15 10:43:07.112182 env[1356]: time="2025-05-15T10:43:07.112010959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:43:07.112182 env[1356]: time="2025-05-15T10:43:07.112045969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:43:07.112182 env[1356]: time="2025-05-15T10:43:07.112060808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:43:07.112483 env[1356]: time="2025-05-15T10:43:07.112442508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651 pid=2641 runtime=io.containerd.runc.v2 May 15 10:43:07.160220 env[1356]: time="2025-05-15T10:43:07.160195554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wnx68,Uid:06162253-c5ba-41c4-a121-45d3a08feaea,Namespace:kube-system,Attempt:0,} returns sandbox id \"704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651\"" May 15 10:43:08.036032 systemd[1]: run-containerd-runc-k8s.io-704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651-runc.241YLi.mount: Deactivated successfully. May 15 10:43:10.798209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470360646.mount: Deactivated successfully. May 15 10:43:12.989663 env[1356]: time="2025-05-15T10:43:12.989616380Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:43:12.992514 env[1356]: time="2025-05-15T10:43:12.992493574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:43:12.993336 env[1356]: time="2025-05-15T10:43:12.993322734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:43:12.993875 env[1356]: time="2025-05-15T10:43:12.993659173Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 10:43:12.994868 env[1356]: time="2025-05-15T10:43:12.994852970Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 10:43:12.995826 env[1356]: time="2025-05-15T10:43:12.995267054Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:43:13.005320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485996064.mount: Deactivated successfully. May 15 10:43:13.010003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677899118.mount: Deactivated successfully. May 15 10:43:13.012371 env[1356]: time="2025-05-15T10:43:13.012348957Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b\"" May 15 10:43:13.012777 env[1356]: time="2025-05-15T10:43:13.012762921Z" level=info msg="StartContainer for \"8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b\"" May 15 10:43:13.051621 env[1356]: time="2025-05-15T10:43:13.051039733Z" level=info msg="StartContainer for \"8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b\" returns successfully" May 15 10:43:13.482467 env[1356]: time="2025-05-15T10:43:13.482435099Z" level=info msg="shim disconnected" id=8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b May 15 10:43:13.482467 env[1356]: time="2025-05-15T10:43:13.482470135Z" level=warning msg="cleaning up after shim disconnected" 
id=8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b namespace=k8s.io May 15 10:43:13.482620 env[1356]: time="2025-05-15T10:43:13.482477017Z" level=info msg="cleaning up dead shim" May 15 10:43:13.488061 env[1356]: time="2025-05-15T10:43:13.488035727Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2724 runtime=io.containerd.runc.v2\n" May 15 10:43:13.892780 env[1356]: time="2025-05-15T10:43:13.887384645Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:43:13.896583 env[1356]: time="2025-05-15T10:43:13.896556430Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487\"" May 15 10:43:13.897743 env[1356]: time="2025-05-15T10:43:13.897052812Z" level=info msg="StartContainer for \"6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487\"" May 15 10:43:13.930426 env[1356]: time="2025-05-15T10:43:13.930402861Z" level=info msg="StartContainer for \"6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487\" returns successfully" May 15 10:43:13.934286 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:43:13.934455 systemd[1]: Stopped systemd-sysctl.service. May 15 10:43:13.934597 systemd[1]: Stopping systemd-sysctl.service... May 15 10:43:13.936091 systemd[1]: Starting systemd-sysctl.service... May 15 10:43:13.947574 systemd[1]: Finished systemd-sysctl.service. 
May 15 10:43:13.953087 env[1356]: time="2025-05-15T10:43:13.953026688Z" level=info msg="shim disconnected" id=6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487 May 15 10:43:13.953200 env[1356]: time="2025-05-15T10:43:13.953189085Z" level=warning msg="cleaning up after shim disconnected" id=6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487 namespace=k8s.io May 15 10:43:13.953248 env[1356]: time="2025-05-15T10:43:13.953238255Z" level=info msg="cleaning up dead shim" May 15 10:43:13.957828 env[1356]: time="2025-05-15T10:43:13.957809578Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2789 runtime=io.containerd.runc.v2\n" May 15 10:43:14.003384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b-rootfs.mount: Deactivated successfully. May 15 10:43:14.282742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887550981.mount: Deactivated successfully. 
May 15 10:43:14.757098 env[1356]: time="2025-05-15T10:43:14.757062713Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:43:14.759058 env[1356]: time="2025-05-15T10:43:14.758119352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:43:14.759326 env[1356]: time="2025-05-15T10:43:14.759210982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:43:14.759634 env[1356]: time="2025-05-15T10:43:14.759612900Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 10:43:14.762650 env[1356]: time="2025-05-15T10:43:14.762311859Z" level=info msg="CreateContainer within sandbox \"704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 10:43:14.768040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427457570.mount: Deactivated successfully. 
May 15 10:43:14.782395 env[1356]: time="2025-05-15T10:43:14.782364111Z" level=info msg="CreateContainer within sandbox \"704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\"" May 15 10:43:14.783318 env[1356]: time="2025-05-15T10:43:14.782712914Z" level=info msg="StartContainer for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\"" May 15 10:43:14.812593 env[1356]: time="2025-05-15T10:43:14.812565151Z" level=info msg="StartContainer for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" returns successfully" May 15 10:43:14.888972 env[1356]: time="2025-05-15T10:43:14.888947959Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:43:14.917099 env[1356]: time="2025-05-15T10:43:14.917065778Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2\"" May 15 10:43:14.917691 env[1356]: time="2025-05-15T10:43:14.917656894Z" level=info msg="StartContainer for \"1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2\"" May 15 10:43:14.926718 kubelet[2321]: I0515 10:43:14.926510 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-wnx68" podStartSLOduration=2.324682801 podStartE2EDuration="9.924151464s" podCreationTimestamp="2025-05-15 10:43:05 +0000 UTC" firstStartedPulling="2025-05-15 10:43:07.160989937 +0000 UTC m=+14.513211568" lastFinishedPulling="2025-05-15 10:43:14.760458598 +0000 UTC m=+22.112680231" observedRunningTime="2025-05-15 10:43:14.923907565 +0000 UTC 
m=+22.276129207" watchObservedRunningTime="2025-05-15 10:43:14.924151464 +0000 UTC m=+22.276373101" May 15 10:43:15.036033 env[1356]: time="2025-05-15T10:43:15.035967942Z" level=info msg="StartContainer for \"1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2\" returns successfully" May 15 10:43:15.137517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2-rootfs.mount: Deactivated successfully. May 15 10:43:15.382814 env[1356]: time="2025-05-15T10:43:15.382781806Z" level=info msg="shim disconnected" id=1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2 May 15 10:43:15.382814 env[1356]: time="2025-05-15T10:43:15.382809022Z" level=warning msg="cleaning up after shim disconnected" id=1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2 namespace=k8s.io May 15 10:43:15.382814 env[1356]: time="2025-05-15T10:43:15.382816054Z" level=info msg="cleaning up dead shim" May 15 10:43:15.392997 env[1356]: time="2025-05-15T10:43:15.392949050Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:43:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n" May 15 10:43:15.919581 env[1356]: time="2025-05-15T10:43:15.919551548Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:43:15.944534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539728315.mount: Deactivated successfully. 
May 15 10:43:15.955456 env[1356]: time="2025-05-15T10:43:15.955430872Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1\"" May 15 10:43:15.956227 env[1356]: time="2025-05-15T10:43:15.956211035Z" level=info msg="StartContainer for \"ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1\"" May 15 10:43:15.990245 env[1356]: time="2025-05-15T10:43:15.990213486Z" level=info msg="StartContainer for \"ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1\" returns successfully" May 15 10:43:16.003638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637293076.mount: Deactivated successfully. May 15 10:43:16.009533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1-rootfs.mount: Deactivated successfully. 
May 15 10:43:16.028041 env[1356]: time="2025-05-15T10:43:16.028006683Z" level=info msg="shim disconnected" id=ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1 May 15 10:43:16.028041 env[1356]: time="2025-05-15T10:43:16.028038899Z" level=warning msg="cleaning up after shim disconnected" id=ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1 namespace=k8s.io May 15 10:43:16.035197 env[1356]: time="2025-05-15T10:43:16.028045526Z" level=info msg="cleaning up dead shim" May 15 10:43:16.035197 env[1356]: time="2025-05-15T10:43:16.033421146Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2936 runtime=io.containerd.runc.v2\n" May 15 10:43:16.896361 env[1356]: time="2025-05-15T10:43:16.896333519Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:43:16.916865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45344859.mount: Deactivated successfully. May 15 10:43:16.932338 env[1356]: time="2025-05-15T10:43:16.932314532Z" level=info msg="CreateContainer within sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\"" May 15 10:43:16.932922 env[1356]: time="2025-05-15T10:43:16.932898723Z" level=info msg="StartContainer for \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\"" May 15 10:43:16.981689 env[1356]: time="2025-05-15T10:43:16.977627318Z" level=info msg="StartContainer for \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\" returns successfully" May 15 10:43:17.003362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852981709.mount: Deactivated successfully. 
May 15 10:43:17.045182 systemd[1]: run-containerd-runc-k8s.io-0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf-runc.Mxnph5.mount: Deactivated successfully. May 15 10:43:17.157057 kubelet[2321]: I0515 10:43:17.148203 2321 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 10:43:17.282714 kubelet[2321]: I0515 10:43:17.282667 2321 topology_manager.go:215] "Topology Admit Handler" podUID="7e9e1b54-12ec-44eb-a186-2c705eab80bd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6kb6j" May 15 10:43:17.291045 kubelet[2321]: I0515 10:43:17.291017 2321 topology_manager.go:215] "Topology Admit Handler" podUID="37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qplrx" May 15 10:43:17.341160 kubelet[2321]: I0515 10:43:17.341120 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7jn\" (UniqueName: \"kubernetes.io/projected/37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4-kube-api-access-st7jn\") pod \"coredns-7db6d8ff4d-qplrx\" (UID: \"37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4\") " pod="kube-system/coredns-7db6d8ff4d-qplrx" May 15 10:43:17.341283 kubelet[2321]: I0515 10:43:17.341194 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e9e1b54-12ec-44eb-a186-2c705eab80bd-config-volume\") pod \"coredns-7db6d8ff4d-6kb6j\" (UID: \"7e9e1b54-12ec-44eb-a186-2c705eab80bd\") " pod="kube-system/coredns-7db6d8ff4d-6kb6j" May 15 10:43:17.341283 kubelet[2321]: I0515 10:43:17.341239 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4h59\" (UniqueName: \"kubernetes.io/projected/7e9e1b54-12ec-44eb-a186-2c705eab80bd-kube-api-access-h4h59\") pod \"coredns-7db6d8ff4d-6kb6j\" (UID: \"7e9e1b54-12ec-44eb-a186-2c705eab80bd\") " pod="kube-system/coredns-7db6d8ff4d-6kb6j" May 15 
10:43:17.341283 kubelet[2321]: I0515 10:43:17.341258 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4-config-volume\") pod \"coredns-7db6d8ff4d-qplrx\" (UID: \"37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4\") " pod="kube-system/coredns-7db6d8ff4d-qplrx" May 15 10:43:17.631314 env[1356]: time="2025-05-15T10:43:17.631288656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6kb6j,Uid:7e9e1b54-12ec-44eb-a186-2c705eab80bd,Namespace:kube-system,Attempt:0,}" May 15 10:43:17.655871 env[1356]: time="2025-05-15T10:43:17.631732009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qplrx,Uid:37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4,Namespace:kube-system,Attempt:0,}" May 15 10:43:17.922481 kubelet[2321]: I0515 10:43:17.922086 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wwt4l" podStartSLOduration=6.800741037 podStartE2EDuration="12.922056273s" podCreationTimestamp="2025-05-15 10:43:05 +0000 UTC" firstStartedPulling="2025-05-15 10:43:06.873113549 +0000 UTC m=+14.225335180" lastFinishedPulling="2025-05-15 10:43:12.99442878 +0000 UTC m=+20.346650416" observedRunningTime="2025-05-15 10:43:17.921944091 +0000 UTC m=+25.274165732" watchObservedRunningTime="2025-05-15 10:43:17.922056273 +0000 UTC m=+25.274277910" May 15 10:43:18.572700 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 15 10:43:18.890697 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
May 15 10:43:20.999766 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 10:43:20.999886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 10:43:21.031435 systemd-networkd[1114]: cilium_host: Link UP May 15 10:43:21.031521 systemd-networkd[1114]: cilium_net: Link UP May 15 10:43:21.031629 systemd-networkd[1114]: cilium_net: Gained carrier May 15 10:43:21.031784 systemd-networkd[1114]: cilium_host: Gained carrier May 15 10:43:21.266381 systemd-networkd[1114]: cilium_vxlan: Link UP May 15 10:43:21.266385 systemd-networkd[1114]: cilium_vxlan: Gained carrier May 15 10:43:21.567836 systemd-networkd[1114]: cilium_host: Gained IPv6LL May 15 10:43:21.759798 systemd-networkd[1114]: cilium_net: Gained IPv6LL May 15 10:43:22.975760 systemd-networkd[1114]: cilium_vxlan: Gained IPv6LL May 15 10:43:23.188697 kernel: NET: Registered PF_ALG protocol family May 15 10:43:24.439489 systemd-networkd[1114]: lxc_health: Link UP May 15 10:43:24.453737 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:43:24.451710 systemd-networkd[1114]: lxc_health: Gained carrier May 15 10:43:25.008522 systemd-networkd[1114]: lxc081506aa0cf7: Link UP May 15 10:43:25.015699 kernel: eth0: renamed from tmpd4555 May 15 10:43:25.028958 systemd-networkd[1114]: lxc081506aa0cf7: Gained carrier May 15 10:43:25.029724 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc081506aa0cf7: link becomes ready May 15 10:43:25.078287 systemd-networkd[1114]: lxc1321c1be2899: Link UP May 15 10:43:25.090692 kernel: eth0: renamed from tmp5b585 May 15 10:43:25.090762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1321c1be2899: link becomes ready May 15 10:43:25.091455 systemd-networkd[1114]: lxc1321c1be2899: Gained carrier May 15 10:43:26.367789 systemd-networkd[1114]: lxc_health: Gained IPv6LL May 15 10:43:26.495805 systemd-networkd[1114]: lxc1321c1be2899: Gained IPv6LL May 15 10:43:26.623812 systemd-networkd[1114]: lxc081506aa0cf7: Gained IPv6LL May 15 
10:43:27.837289 kubelet[2321]: I0515 10:43:27.837252 2321 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 10:43:27.915966 env[1356]: time="2025-05-15T10:43:27.915889000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:43:27.915966 env[1356]: time="2025-05-15T10:43:27.915944003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:43:27.916400 env[1356]: time="2025-05-15T10:43:27.916359806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:43:27.919846 env[1356]: time="2025-05-15T10:43:27.919778108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b5854636cfbfe2b8481efeffddb1a8345810e06810ac80860cbf64ae6e0332f pid=3487 runtime=io.containerd.runc.v2 May 15 10:43:27.955816 env[1356]: time="2025-05-15T10:43:27.955477809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:43:27.955816 env[1356]: time="2025-05-15T10:43:27.955553716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:43:27.955816 env[1356]: time="2025-05-15T10:43:27.955579462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:43:27.956282 env[1356]: time="2025-05-15T10:43:27.956247128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d45559616c185ed3b619e2a63314295afb0b3b785248c03487683cfaf0342ad7 pid=3508 runtime=io.containerd.runc.v2 May 15 10:43:27.980062 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:43:28.026720 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:43:28.034613 env[1356]: time="2025-05-15T10:43:28.034583628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qplrx,Uid:37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b5854636cfbfe2b8481efeffddb1a8345810e06810ac80860cbf64ae6e0332f\"" May 15 10:43:28.048582 env[1356]: time="2025-05-15T10:43:28.048552642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6kb6j,Uid:7e9e1b54-12ec-44eb-a186-2c705eab80bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d45559616c185ed3b619e2a63314295afb0b3b785248c03487683cfaf0342ad7\"" May 15 10:43:28.077702 env[1356]: time="2025-05-15T10:43:28.077664955Z" level=info msg="CreateContainer within sandbox \"d45559616c185ed3b619e2a63314295afb0b3b785248c03487683cfaf0342ad7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:43:28.090357 env[1356]: time="2025-05-15T10:43:28.090282363Z" level=info msg="CreateContainer within sandbox \"5b5854636cfbfe2b8481efeffddb1a8345810e06810ac80860cbf64ae6e0332f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:43:28.329390 env[1356]: time="2025-05-15T10:43:28.329341016Z" level=info msg="CreateContainer within sandbox \"5b5854636cfbfe2b8481efeffddb1a8345810e06810ac80860cbf64ae6e0332f\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0dc54a29ab440f0e97feac1d45c6e604ffc3a196c94e26ef59a2522d8b6b96f0\"" May 15 10:43:28.330588 env[1356]: time="2025-05-15T10:43:28.330565633Z" level=info msg="StartContainer for \"0dc54a29ab440f0e97feac1d45c6e604ffc3a196c94e26ef59a2522d8b6b96f0\"" May 15 10:43:28.376109 env[1356]: time="2025-05-15T10:43:28.376028082Z" level=info msg="CreateContainer within sandbox \"d45559616c185ed3b619e2a63314295afb0b3b785248c03487683cfaf0342ad7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68e0ce5e532f74c88a8dd2961a2b98fe8d81b5bbc4ab8472cfdefea67b257f4f\"" May 15 10:43:28.377404 env[1356]: time="2025-05-15T10:43:28.376867451Z" level=info msg="StartContainer for \"68e0ce5e532f74c88a8dd2961a2b98fe8d81b5bbc4ab8472cfdefea67b257f4f\"" May 15 10:43:28.500839 env[1356]: time="2025-05-15T10:43:28.500801536Z" level=info msg="StartContainer for \"68e0ce5e532f74c88a8dd2961a2b98fe8d81b5bbc4ab8472cfdefea67b257f4f\" returns successfully" May 15 10:43:28.505884 env[1356]: time="2025-05-15T10:43:28.505850947Z" level=info msg="StartContainer for \"0dc54a29ab440f0e97feac1d45c6e604ffc3a196c94e26ef59a2522d8b6b96f0\" returns successfully" May 15 10:43:28.925069 systemd[1]: run-containerd-runc-k8s.io-d45559616c185ed3b619e2a63314295afb0b3b785248c03487683cfaf0342ad7-runc.e9p0rD.mount: Deactivated successfully. 
May 15 10:43:29.106261 kubelet[2321]: I0515 10:43:29.106144 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qplrx" podStartSLOduration=24.106132564 podStartE2EDuration="24.106132564s" podCreationTimestamp="2025-05-15 10:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:43:29.105870991 +0000 UTC m=+36.458092628" watchObservedRunningTime="2025-05-15 10:43:29.106132564 +0000 UTC m=+36.458354201" May 15 10:43:30.097491 kubelet[2321]: I0515 10:43:30.097448 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6kb6j" podStartSLOduration=25.097425661 podStartE2EDuration="25.097425661s" podCreationTimestamp="2025-05-15 10:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:43:29.13730652 +0000 UTC m=+36.489528158" watchObservedRunningTime="2025-05-15 10:43:30.097425661 +0000 UTC m=+37.449647299" May 15 10:43:33.023548 systemd[1]: Started sshd@5-139.178.70.108:22-185.156.73.234:58890.service. 
May 15 10:43:36.605721 sshd[3655]: Invalid user 1234 from 185.156.73.234 port 58890 May 15 10:43:36.767955 sshd[3655]: pam_faillock(sshd:auth): User unknown May 15 10:43:36.772192 sshd[3655]: pam_unix(sshd:auth): check pass; user unknown May 15 10:43:36.772216 sshd[3655]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.234 May 15 10:43:36.772264 sshd[3655]: pam_faillock(sshd:auth): User unknown May 15 10:43:39.055865 sshd[3655]: Failed password for invalid user 1234 from 185.156.73.234 port 58890 ssh2 May 15 10:43:40.169481 sshd[3655]: Connection closed by invalid user 1234 185.156.73.234 port 58890 [preauth] May 15 10:43:40.170375 systemd[1]: sshd@5-139.178.70.108:22-185.156.73.234:58890.service: Deactivated successfully. May 15 10:44:02.735625 systemd[1]: Started sshd@6-139.178.70.108:22-147.75.109.163:44634.service. May 15 10:44:03.052007 sshd[3665]: Accepted publickey for core from 147.75.109.163 port 44634 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:03.056079 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:03.068012 systemd-logind[1346]: New session 8 of user core. May 15 10:44:03.068379 systemd[1]: Started session-8.scope. May 15 10:44:03.489801 sshd[3665]: pam_unix(sshd:session): session closed for user core May 15 10:44:03.491322 systemd-logind[1346]: Session 8 logged out. Waiting for processes to exit. May 15 10:44:03.491522 systemd[1]: sshd@6-139.178.70.108:22-147.75.109.163:44634.service: Deactivated successfully. May 15 10:44:03.492037 systemd[1]: session-8.scope: Deactivated successfully. May 15 10:44:03.492801 systemd-logind[1346]: Removed session 8. May 15 10:44:08.492160 systemd[1]: Started sshd@7-139.178.70.108:22-147.75.109.163:56070.service. 
May 15 10:44:08.581860 sshd[3681]: Accepted publickey for core from 147.75.109.163 port 56070 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:08.582878 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:08.586294 systemd-logind[1346]: New session 9 of user core. May 15 10:44:08.586896 systemd[1]: Started session-9.scope. May 15 10:44:08.692459 sshd[3681]: pam_unix(sshd:session): session closed for user core May 15 10:44:08.693972 systemd[1]: sshd@7-139.178.70.108:22-147.75.109.163:56070.service: Deactivated successfully. May 15 10:44:08.694670 systemd[1]: session-9.scope: Deactivated successfully. May 15 10:44:08.694711 systemd-logind[1346]: Session 9 logged out. Waiting for processes to exit. May 15 10:44:08.695301 systemd-logind[1346]: Removed session 9. May 15 10:44:13.694928 systemd[1]: Started sshd@8-139.178.70.108:22-147.75.109.163:56086.service. May 15 10:44:13.734911 sshd[3694]: Accepted publickey for core from 147.75.109.163 port 56086 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:13.736336 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:13.741213 systemd-logind[1346]: New session 10 of user core. May 15 10:44:13.741640 systemd[1]: Started session-10.scope. May 15 10:44:13.831719 sshd[3694]: pam_unix(sshd:session): session closed for user core May 15 10:44:13.833183 systemd-logind[1346]: Session 10 logged out. Waiting for processes to exit. May 15 10:44:13.833278 systemd[1]: sshd@8-139.178.70.108:22-147.75.109.163:56086.service: Deactivated successfully. May 15 10:44:13.833773 systemd[1]: session-10.scope: Deactivated successfully. May 15 10:44:13.834197 systemd-logind[1346]: Removed session 10. May 15 10:44:18.834254 systemd[1]: Started sshd@9-139.178.70.108:22-147.75.109.163:44932.service. 
May 15 10:44:18.872232 sshd[3708]: Accepted publickey for core from 147.75.109.163 port 44932 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:18.873445 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:18.876488 systemd[1]: Started session-11.scope. May 15 10:44:18.876717 systemd-logind[1346]: New session 11 of user core. May 15 10:44:18.962685 sshd[3708]: pam_unix(sshd:session): session closed for user core May 15 10:44:18.964412 systemd[1]: Started sshd@10-139.178.70.108:22-147.75.109.163:44942.service. May 15 10:44:18.968891 systemd[1]: sshd@9-139.178.70.108:22-147.75.109.163:44932.service: Deactivated successfully. May 15 10:44:18.969434 systemd[1]: session-11.scope: Deactivated successfully. May 15 10:44:18.970567 systemd-logind[1346]: Session 11 logged out. Waiting for processes to exit. May 15 10:44:18.971579 systemd-logind[1346]: Removed session 11. May 15 10:44:19.004445 sshd[3719]: Accepted publickey for core from 147.75.109.163 port 44942 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:19.005304 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:19.007616 systemd-logind[1346]: New session 12 of user core. May 15 10:44:19.008245 systemd[1]: Started session-12.scope. May 15 10:44:19.220865 systemd[1]: Started sshd@11-139.178.70.108:22-147.75.109.163:44946.service. May 15 10:44:19.229834 sshd[3719]: pam_unix(sshd:session): session closed for user core May 15 10:44:19.233639 systemd-logind[1346]: Session 12 logged out. Waiting for processes to exit. May 15 10:44:19.233800 systemd[1]: sshd@10-139.178.70.108:22-147.75.109.163:44942.service: Deactivated successfully. May 15 10:44:19.234346 systemd[1]: session-12.scope: Deactivated successfully. May 15 10:44:19.236843 systemd-logind[1346]: Removed session 12. 
May 15 10:44:19.264073 sshd[3730]: Accepted publickey for core from 147.75.109.163 port 44946 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:19.265219 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:19.267608 systemd-logind[1346]: New session 13 of user core.
May 15 10:44:19.268250 systemd[1]: Started session-13.scope.
May 15 10:44:19.378949 sshd[3730]: pam_unix(sshd:session): session closed for user core
May 15 10:44:19.380755 systemd[1]: sshd@11-139.178.70.108:22-147.75.109.163:44946.service: Deactivated successfully.
May 15 10:44:19.381393 systemd-logind[1346]: Session 13 logged out. Waiting for processes to exit.
May 15 10:44:19.381394 systemd[1]: session-13.scope: Deactivated successfully.
May 15 10:44:19.382237 systemd-logind[1346]: Removed session 13.
May 15 10:44:24.381965 systemd[1]: Started sshd@12-139.178.70.108:22-147.75.109.163:44960.service.
May 15 10:44:24.423121 sshd[3744]: Accepted publickey for core from 147.75.109.163 port 44960 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:24.424410 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:24.427189 systemd-logind[1346]: New session 14 of user core.
May 15 10:44:24.427650 systemd[1]: Started session-14.scope.
May 15 10:44:24.524851 sshd[3744]: pam_unix(sshd:session): session closed for user core
May 15 10:44:24.526475 systemd-logind[1346]: Session 14 logged out. Waiting for processes to exit.
May 15 10:44:24.526563 systemd[1]: sshd@12-139.178.70.108:22-147.75.109.163:44960.service: Deactivated successfully.
May 15 10:44:24.527080 systemd[1]: session-14.scope: Deactivated successfully.
May 15 10:44:24.527378 systemd-logind[1346]: Removed session 14.
May 15 10:44:26.592071 update_engine[1347]: I0515 10:44:26.574481 1347 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 10:44:26.592401 update_engine[1347]: I0515 10:44:26.592082 1347 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 10:44:26.636218 update_engine[1347]: I0515 10:44:26.636191 1347 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 10:44:26.636643 update_engine[1347]: I0515 10:44:26.636618 1347 omaha_request_params.cc:62] Current group set to lts
May 15 10:44:26.668753 update_engine[1347]: I0515 10:44:26.668721 1347 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 10:44:26.668753 update_engine[1347]: I0515 10:44:26.668740 1347 update_attempter.cc:643] Scheduling an action processor start.
May 15 10:44:26.668753 update_engine[1347]: I0515 10:44:26.668753 1347 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 10:44:26.690029 update_engine[1347]: I0515 10:44:26.689994 1347 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 10:44:26.690117 update_engine[1347]: I0515 10:44:26.690078 1347 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 15 10:44:26.690117 update_engine[1347]: I0515 10:44:26.690083 1347 omaha_request_action.cc:271] Request:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]:
May 15 10:44:26.690117 update_engine[1347]: I0515 10:44:26.690088 1347 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 10:44:26.797861 update_engine[1347]: I0515 10:44:26.797827 1347 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 10:44:26.797987 update_engine[1347]: E0515 10:44:26.797930 1347 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 10:44:26.798080 update_engine[1347]: I0515 10:44:26.797986 1347 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 10:44:26.996003 locksmithd[1403]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 10:44:29.527520 systemd[1]: Started sshd@13-139.178.70.108:22-147.75.109.163:56270.service.
May 15 10:44:29.566635 sshd[3758]: Accepted publickey for core from 147.75.109.163 port 56270 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:29.567509 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:29.570557 systemd[1]: Started session-15.scope.
May 15 10:44:29.571206 systemd-logind[1346]: New session 15 of user core.
May 15 10:44:29.655958 sshd[3758]: pam_unix(sshd:session): session closed for user core
May 15 10:44:29.657836 systemd[1]: Started sshd@14-139.178.70.108:22-147.75.109.163:56276.service.
May 15 10:44:29.660213 systemd[1]: sshd@13-139.178.70.108:22-147.75.109.163:56270.service: Deactivated successfully.
May 15 10:44:29.661483 systemd-logind[1346]: Session 15 logged out. Waiting for processes to exit.
May 15 10:44:29.661539 systemd[1]: session-15.scope: Deactivated successfully.
May 15 10:44:29.663019 systemd-logind[1346]: Removed session 15.
May 15 10:44:29.697913 sshd[3768]: Accepted publickey for core from 147.75.109.163 port 56276 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:29.699104 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:29.702283 systemd[1]: Started session-16.scope.
May 15 10:44:29.702513 systemd-logind[1346]: New session 16 of user core.
May 15 10:44:30.791272 systemd[1]: Started sshd@15-139.178.70.108:22-147.75.109.163:56292.service.
May 15 10:44:30.792367 sshd[3768]: pam_unix(sshd:session): session closed for user core
May 15 10:44:30.798134 systemd-logind[1346]: Session 16 logged out. Waiting for processes to exit.
May 15 10:44:30.798360 systemd[1]: sshd@14-139.178.70.108:22-147.75.109.163:56276.service: Deactivated successfully.
May 15 10:44:30.798907 systemd[1]: session-16.scope: Deactivated successfully.
May 15 10:44:30.799816 systemd-logind[1346]: Removed session 16.
May 15 10:44:30.839886 sshd[3779]: Accepted publickey for core from 147.75.109.163 port 56292 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:30.840949 sshd[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:30.845939 systemd-logind[1346]: New session 17 of user core.
May 15 10:44:30.846238 systemd[1]: Started session-17.scope.
May 15 10:44:32.960635 systemd[1]: Started sshd@16-139.178.70.108:22-147.75.109.163:56298.service.
May 15 10:44:32.975981 sshd[3779]: pam_unix(sshd:session): session closed for user core
May 15 10:44:33.001140 systemd[1]: sshd@15-139.178.70.108:22-147.75.109.163:56292.service: Deactivated successfully.
May 15 10:44:33.001969 systemd[1]: session-17.scope: Deactivated successfully.
May 15 10:44:33.002248 systemd-logind[1346]: Session 17 logged out. Waiting for processes to exit.
May 15 10:44:33.003019 systemd-logind[1346]: Removed session 17.
May 15 10:44:33.114056 sshd[3797]: Accepted publickey for core from 147.75.109.163 port 56298 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:33.115558 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:33.118819 systemd-logind[1346]: New session 18 of user core.
May 15 10:44:33.119169 systemd[1]: Started session-18.scope.
May 15 10:44:33.558891 sshd[3797]: pam_unix(sshd:session): session closed for user core
May 15 10:44:33.560665 systemd[1]: Started sshd@17-139.178.70.108:22-147.75.109.163:56300.service.
May 15 10:44:33.562246 systemd-logind[1346]: Session 18 logged out. Waiting for processes to exit.
May 15 10:44:33.562897 systemd[1]: sshd@16-139.178.70.108:22-147.75.109.163:56298.service: Deactivated successfully.
May 15 10:44:33.563413 systemd[1]: session-18.scope: Deactivated successfully.
May 15 10:44:33.563923 systemd-logind[1346]: Removed session 18.
May 15 10:44:33.659194 sshd[3807]: Accepted publickey for core from 147.75.109.163 port 56300 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:33.660224 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:33.662793 systemd-logind[1346]: New session 19 of user core.
May 15 10:44:33.663408 systemd[1]: Started session-19.scope.
May 15 10:44:33.819106 sshd[3807]: pam_unix(sshd:session): session closed for user core
May 15 10:44:33.820990 systemd-logind[1346]: Session 19 logged out. Waiting for processes to exit.
May 15 10:44:33.821170 systemd[1]: sshd@17-139.178.70.108:22-147.75.109.163:56300.service: Deactivated successfully.
May 15 10:44:33.821628 systemd[1]: session-19.scope: Deactivated successfully.
May 15 10:44:33.822380 systemd-logind[1346]: Removed session 19.
May 15 10:44:37.450061 update_engine[1347]: I0515 10:44:37.449724 1347 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 10:44:37.450061 update_engine[1347]: I0515 10:44:37.449908 1347 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 10:44:37.450061 update_engine[1347]: E0515 10:44:37.449978 1347 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 10:44:37.450061 update_engine[1347]: I0515 10:44:37.450037 1347 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 15 10:44:38.822085 systemd[1]: Started sshd@18-139.178.70.108:22-147.75.109.163:51376.service.
May 15 10:44:38.859938 sshd[3827]: Accepted publickey for core from 147.75.109.163 port 51376 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:38.861176 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:38.864593 systemd[1]: Started session-20.scope.
May 15 10:44:38.864864 systemd-logind[1346]: New session 20 of user core.
May 15 10:44:38.950108 sshd[3827]: pam_unix(sshd:session): session closed for user core
May 15 10:44:38.951766 systemd[1]: sshd@18-139.178.70.108:22-147.75.109.163:51376.service: Deactivated successfully.
May 15 10:44:38.952538 systemd[1]: session-20.scope: Deactivated successfully.
May 15 10:44:38.952933 systemd-logind[1346]: Session 20 logged out. Waiting for processes to exit.
May 15 10:44:38.953539 systemd-logind[1346]: Removed session 20.
May 15 10:44:43.953324 systemd[1]: Started sshd@19-139.178.70.108:22-147.75.109.163:51382.service.
May 15 10:44:43.995693 sshd[3840]: Accepted publickey for core from 147.75.109.163 port 51382 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:43.997006 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:43.999925 systemd-logind[1346]: New session 21 of user core.
May 15 10:44:44.000392 systemd[1]: Started session-21.scope.
May 15 10:44:44.092410 sshd[3840]: pam_unix(sshd:session): session closed for user core
May 15 10:44:44.094296 systemd-logind[1346]: Session 21 logged out. Waiting for processes to exit.
May 15 10:44:44.094388 systemd[1]: sshd@19-139.178.70.108:22-147.75.109.163:51382.service: Deactivated successfully.
May 15 10:44:44.094911 systemd[1]: session-21.scope: Deactivated successfully.
May 15 10:44:44.095301 systemd-logind[1346]: Removed session 21.
May 15 10:44:47.450543 update_engine[1347]: I0515 10:44:47.450201 1347 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 10:44:47.450543 update_engine[1347]: I0515 10:44:47.450392 1347 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 10:44:47.450543 update_engine[1347]: E0515 10:44:47.450465 1347 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 10:44:47.450543 update_engine[1347]: I0515 10:44:47.450511 1347 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 15 10:44:49.094416 systemd[1]: Started sshd@20-139.178.70.108:22-147.75.109.163:53054.service.
May 15 10:44:49.132523 sshd[3853]: Accepted publickey for core from 147.75.109.163 port 53054 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:49.133488 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:49.136466 systemd[1]: Started session-22.scope.
May 15 10:44:49.137196 systemd-logind[1346]: New session 22 of user core.
May 15 10:44:49.235308 sshd[3853]: pam_unix(sshd:session): session closed for user core
May 15 10:44:49.236972 systemd-logind[1346]: Session 22 logged out. Waiting for processes to exit.
May 15 10:44:49.238173 systemd[1]: sshd@20-139.178.70.108:22-147.75.109.163:53054.service: Deactivated successfully.
May 15 10:44:49.238971 systemd[1]: session-22.scope: Deactivated successfully.
May 15 10:44:49.239966 systemd-logind[1346]: Removed session 22.
May 15 10:44:54.237952 systemd[1]: Started sshd@21-139.178.70.108:22-147.75.109.163:53060.service.
May 15 10:44:54.280911 sshd[3868]: Accepted publickey for core from 147.75.109.163 port 53060 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:54.282130 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:54.285145 systemd[1]: Started session-23.scope.
May 15 10:44:54.285428 systemd-logind[1346]: New session 23 of user core.
May 15 10:44:54.372341 sshd[3868]: pam_unix(sshd:session): session closed for user core
May 15 10:44:54.374307 systemd[1]: Started sshd@22-139.178.70.108:22-147.75.109.163:53076.service.
May 15 10:44:54.378893 systemd-logind[1346]: Session 23 logged out. Waiting for processes to exit.
May 15 10:44:54.379233 systemd[1]: sshd@21-139.178.70.108:22-147.75.109.163:53060.service: Deactivated successfully.
May 15 10:44:54.379737 systemd[1]: session-23.scope: Deactivated successfully.
May 15 10:44:54.380508 systemd-logind[1346]: Removed session 23.
May 15 10:44:54.415723 sshd[3879]: Accepted publickey for core from 147.75.109.163 port 53076 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8
May 15 10:44:54.416940 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:54.419923 systemd[1]: Started session-24.scope.
May 15 10:44:54.420219 systemd-logind[1346]: New session 24 of user core.
May 15 10:44:56.112948 env[1356]: time="2025-05-15T10:44:56.112539584Z" level=info msg="StopContainer for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" with timeout 30 (s)"
May 15 10:44:56.112948 env[1356]: time="2025-05-15T10:44:56.112877200Z" level=info msg="Stop container \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" with signal terminated"
May 15 10:44:56.124983 env[1356]: time="2025-05-15T10:44:56.123989685Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 10:44:56.130084 env[1356]: time="2025-05-15T10:44:56.130041916Z" level=info msg="StopContainer for \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\" with timeout 2 (s)"
May 15 10:44:56.130208 env[1356]: time="2025-05-15T10:44:56.130185152Z" level=info msg="Stop container \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\" with signal terminated"
May 15 10:44:56.135543 systemd-networkd[1114]: lxc_health: Link DOWN
May 15 10:44:56.135547 systemd-networkd[1114]: lxc_health: Lost carrier
May 15 10:44:56.140958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45-rootfs.mount: Deactivated successfully.
May 15 10:44:56.149412 env[1356]: time="2025-05-15T10:44:56.149369912Z" level=info msg="shim disconnected" id=adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45
May 15 10:44:56.149575 env[1356]: time="2025-05-15T10:44:56.149562852Z" level=warning msg="cleaning up after shim disconnected" id=adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45 namespace=k8s.io
May 15 10:44:56.149646 env[1356]: time="2025-05-15T10:44:56.149636785Z" level=info msg="cleaning up dead shim"
May 15 10:44:56.155567 env[1356]: time="2025-05-15T10:44:56.155538057Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:44:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3944 runtime=io.containerd.runc.v2\n"
May 15 10:44:56.159181 env[1356]: time="2025-05-15T10:44:56.159151824Z" level=info msg="StopContainer for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" returns successfully"
May 15 10:44:56.159737 env[1356]: time="2025-05-15T10:44:56.159666237Z" level=info msg="StopPodSandbox for \"704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651\""
May 15 10:44:56.159778 env[1356]: time="2025-05-15T10:44:56.159759956Z" level=info msg="Container to stop \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:44:56.161157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651-shm.mount: Deactivated successfully.
May 15 10:44:56.178548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf-rootfs.mount: Deactivated successfully.
May 15 10:44:56.181170 env[1356]: time="2025-05-15T10:44:56.181131496Z" level=info msg="shim disconnected" id=0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf
May 15 10:44:56.181240 env[1356]: time="2025-05-15T10:44:56.181171126Z" level=warning msg="cleaning up after shim disconnected" id=0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf namespace=k8s.io
May 15 10:44:56.181240 env[1356]: time="2025-05-15T10:44:56.181179026Z" level=info msg="cleaning up dead shim"
May 15 10:44:56.192992 env[1356]: time="2025-05-15T10:44:56.192949562Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:44:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3982 runtime=io.containerd.runc.v2\n"
May 15 10:44:56.193664 env[1356]: time="2025-05-15T10:44:56.193645468Z" level=info msg="StopContainer for \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\" returns successfully"
May 15 10:44:56.194035 env[1356]: time="2025-05-15T10:44:56.194014391Z" level=info msg="StopPodSandbox for \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\""
May 15 10:44:56.194114 env[1356]: time="2025-05-15T10:44:56.194102087Z" level=info msg="Container to stop \"8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:44:56.194171 env[1356]: time="2025-05-15T10:44:56.194160605Z" level=info msg="Container to stop \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:44:56.194221 env[1356]: time="2025-05-15T10:44:56.194208778Z" level=info msg="Container to stop \"6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:44:56.194269 env[1356]: time="2025-05-15T10:44:56.194257796Z" level=info msg="Container to stop \"1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:44:56.194362 env[1356]: time="2025-05-15T10:44:56.194311161Z" level=info msg="Container to stop \"ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:44:56.196872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b-shm.mount: Deactivated successfully.
May 15 10:44:56.200440 env[1356]: time="2025-05-15T10:44:56.200415019Z" level=info msg="shim disconnected" id=704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651
May 15 10:44:56.200570 env[1356]: time="2025-05-15T10:44:56.200439514Z" level=warning msg="cleaning up after shim disconnected" id=704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651 namespace=k8s.io
May 15 10:44:56.200610 env[1356]: time="2025-05-15T10:44:56.200568693Z" level=info msg="cleaning up dead shim"
May 15 10:44:56.209051 env[1356]: time="2025-05-15T10:44:56.209025217Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:44:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n"
May 15 10:44:56.209453 env[1356]: time="2025-05-15T10:44:56.209436334Z" level=info msg="TearDown network for sandbox \"704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651\" successfully"
May 15 10:44:56.209488 env[1356]: time="2025-05-15T10:44:56.209452581Z" level=info msg="StopPodSandbox for \"704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651\" returns successfully"
May 15 10:44:56.220052 kubelet[2321]: I0515 10:44:56.220030 2321 scope.go:117] "RemoveContainer" containerID="adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45"
May 15 10:44:56.222748 env[1356]: time="2025-05-15T10:44:56.222664000Z" level=info msg="RemoveContainer for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\""
May 15 10:44:56.225596 env[1356]: time="2025-05-15T10:44:56.225581410Z" level=info msg="RemoveContainer for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" returns successfully"
May 15 10:44:56.226259 env[1356]: time="2025-05-15T10:44:56.225814797Z" level=info msg="shim disconnected" id=f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b
May 15 10:44:56.226335 env[1356]: time="2025-05-15T10:44:56.226325734Z" level=warning msg="cleaning up after shim disconnected" id=f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b namespace=k8s.io
May 15 10:44:56.226448 env[1356]: time="2025-05-15T10:44:56.226438747Z" level=info msg="cleaning up dead shim"
May 15 10:44:56.227700 kubelet[2321]: I0515 10:44:56.227147 2321 scope.go:117] "RemoveContainer" containerID="adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45"
May 15 10:44:56.227757 env[1356]: time="2025-05-15T10:44:56.227260431Z" level=error msg="ContainerStatus for \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\": not found"
May 15 10:44:56.232219 kubelet[2321]: E0515 10:44:56.230286 2321 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\": not found" containerID="adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45"
May 15 10:44:56.232219 kubelet[2321]: I0515 10:44:56.232150 2321 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45"} err="failed to get container status \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\": rpc error: code = NotFound desc = an error occurred when try to find container \"adff8ee023975f4717793bb894141e94d42d76ceed6d278f3b23fa2240c72d45\": not found"
May 15 10:44:56.232938 env[1356]: time="2025-05-15T10:44:56.232916201Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:44:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4037 runtime=io.containerd.runc.v2\n"
May 15 10:44:56.233345 env[1356]: time="2025-05-15T10:44:56.233330663Z" level=info msg="TearDown network for sandbox \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" successfully"
May 15 10:44:56.233453 env[1356]: time="2025-05-15T10:44:56.233439143Z" level=info msg="StopPodSandbox for \"f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b\" returns successfully"
May 15 10:44:56.302433 kubelet[2321]: I0515 10:44:56.302406 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkx8g\" (UniqueName: \"kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-kube-api-access-qkx8g\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.302565 kubelet[2321]: I0515 10:44:56.302555 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-kernel\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.302621 kubelet[2321]: I0515 10:44:56.302613 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-hostproc\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.302695 kubelet[2321]: I0515 10:44:56.302669 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7xdh\" (UniqueName: \"kubernetes.io/projected/06162253-c5ba-41c4-a121-45d3a08feaea-kube-api-access-x7xdh\") pod \"06162253-c5ba-41c4-a121-45d3a08feaea\" (UID: \"06162253-c5ba-41c4-a121-45d3a08feaea\") "
May 15 10:44:56.302753 kubelet[2321]: I0515 10:44:56.302745 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-bpf-maps\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.303212 kubelet[2321]: I0515 10:44:56.303203 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-net\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304024 kubelet[2321]: I0515 10:44:56.303300 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-lib-modules\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304024 kubelet[2321]: I0515 10:44:56.303312 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-run\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304024 kubelet[2321]: I0515 10:44:56.303323 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-config-path\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304024 kubelet[2321]: I0515 10:44:56.303334 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cni-path\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304024 kubelet[2321]: I0515 10:44:56.303343 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-hubble-tls\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304024 kubelet[2321]: I0515 10:44:56.303351 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-xtables-lock\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304151 kubelet[2321]: I0515 10:44:56.303365 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06162253-c5ba-41c4-a121-45d3a08feaea-cilium-config-path\") pod \"06162253-c5ba-41c4-a121-45d3a08feaea\" (UID: \"06162253-c5ba-41c4-a121-45d3a08feaea\") "
May 15 10:44:56.304151 kubelet[2321]: I0515 10:44:56.303377 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b56cdb3-6b72-4493-896f-9c54ddf87971-clustermesh-secrets\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304151 kubelet[2321]: I0515 10:44:56.303385 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-cgroup\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.304151 kubelet[2321]: I0515 10:44:56.303395 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-etc-cni-netd\") pod \"2b56cdb3-6b72-4493-896f-9c54ddf87971\" (UID: \"2b56cdb3-6b72-4493-896f-9c54ddf87971\") "
May 15 10:44:56.306291 kubelet[2321]: I0515 10:44:56.303431 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.306378 kubelet[2321]: I0515 10:44:56.306367 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.306440 kubelet[2321]: I0515 10:44:56.306431 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-hostproc" (OuterVolumeSpecName: "hostproc") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.307706 kubelet[2321]: I0515 10:44:56.307694 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-kube-api-access-qkx8g" (OuterVolumeSpecName: "kube-api-access-qkx8g") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "kube-api-access-qkx8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:44:56.307783 kubelet[2321]: I0515 10:44:56.307773 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cni-path" (OuterVolumeSpecName: "cni-path") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.307842 kubelet[2321]: I0515 10:44:56.307833 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.307896 kubelet[2321]: I0515 10:44:56.307888 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.307948 kubelet[2321]: I0515 10:44:56.307940 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.308017 kubelet[2321]: I0515 10:44:56.308009 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.309360 kubelet[2321]: I0515 10:44:56.309348 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 10:44:56.310846 kubelet[2321]: I0515 10:44:56.310833 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06162253-c5ba-41c4-a121-45d3a08feaea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06162253-c5ba-41c4-a121-45d3a08feaea" (UID: "06162253-c5ba-41c4-a121-45d3a08feaea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 10:44:56.312053 kubelet[2321]: I0515 10:44:56.312037 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06162253-c5ba-41c4-a121-45d3a08feaea-kube-api-access-x7xdh" (OuterVolumeSpecName: "kube-api-access-x7xdh") pod "06162253-c5ba-41c4-a121-45d3a08feaea" (UID: "06162253-c5ba-41c4-a121-45d3a08feaea"). InnerVolumeSpecName "kube-api-access-x7xdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:44:56.312875 kubelet[2321]: I0515 10:44:56.312862 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:44:56.312948 kubelet[2321]: I0515 10:44:56.312939 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:44:56.313007 kubelet[2321]: I0515 10:44:56.312998 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:56.314572 kubelet[2321]: I0515 10:44:56.314552 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b56cdb3-6b72-4493-896f-9c54ddf87971-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2b56cdb3-6b72-4493-896f-9c54ddf87971" (UID: "2b56cdb3-6b72-4493-896f-9c54ddf87971"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:44:56.403629 kubelet[2321]: I0515 10:44:56.403551 2321 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.403770 kubelet[2321]: I0515 10:44:56.403761 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06162253-c5ba-41c4-a121-45d3a08feaea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.403820 kubelet[2321]: I0515 10:44:56.403812 2321 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.403866 kubelet[2321]: I0515 10:44:56.403859 2321 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b56cdb3-6b72-4493-896f-9c54ddf87971-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.403911 kubelet[2321]: I0515 10:44:56.403904 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.403961 kubelet[2321]: I0515 10:44:56.403954 2321 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404007 kubelet[2321]: I0515 10:44:56.403999 2321 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qkx8g\" (UniqueName: \"kubernetes.io/projected/2b56cdb3-6b72-4493-896f-9c54ddf87971-kube-api-access-qkx8g\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404053 kubelet[2321]: I0515 10:44:56.404046 2321 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404099 kubelet[2321]: I0515 10:44:56.404092 2321 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404153 kubelet[2321]: I0515 10:44:56.404145 2321 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x7xdh\" (UniqueName: \"kubernetes.io/projected/06162253-c5ba-41c4-a121-45d3a08feaea-kube-api-access-x7xdh\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404204 kubelet[2321]: I0515 10:44:56.404196 2321 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404248 kubelet[2321]: I0515 10:44:56.404241 2321 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404292 kubelet[2321]: I0515 10:44:56.404286 2321 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404337 kubelet[2321]: I0515 10:44:56.404330 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404381 kubelet[2321]: I0515 10:44:56.404375 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b56cdb3-6b72-4493-896f-9c54ddf87971-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.404424 kubelet[2321]: I0515 10:44:56.404417 2321 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b56cdb3-6b72-4493-896f-9c54ddf87971-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:44:56.767093 kubelet[2321]: I0515 10:44:56.767037 2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06162253-c5ba-41c4-a121-45d3a08feaea" path="/var/lib/kubelet/pods/06162253-c5ba-41c4-a121-45d3a08feaea/volumes" May 15 10:44:57.093220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-704291867eb85ac1b2ae1da298b8ce5c8de0e6c498a543089b4b4be85fe79651-rootfs.mount: Deactivated successfully. May 15 10:44:57.093305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f536e729699158d6cedd391725a16f9bd57bd482d622548ba4fc88c04eaa537b-rootfs.mount: Deactivated successfully. May 15 10:44:57.093362 systemd[1]: var-lib-kubelet-pods-2b56cdb3\x2d6b72\x2d4493\x2d896f\x2d9c54ddf87971-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 10:44:57.093415 systemd[1]: var-lib-kubelet-pods-06162253\x2dc5ba\x2d41c4\x2da121\x2d45d3a08feaea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7xdh.mount: Deactivated successfully. 
May 15 10:44:57.093468 systemd[1]: var-lib-kubelet-pods-2b56cdb3\x2d6b72\x2d4493\x2d896f\x2d9c54ddf87971-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkx8g.mount: Deactivated successfully. May 15 10:44:57.093521 systemd[1]: var-lib-kubelet-pods-2b56cdb3\x2d6b72\x2d4493\x2d896f\x2d9c54ddf87971-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:44:57.228245 kubelet[2321]: I0515 10:44:57.228222 2321 scope.go:117] "RemoveContainer" containerID="0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf" May 15 10:44:57.229475 env[1356]: time="2025-05-15T10:44:57.229396697Z" level=info msg="RemoveContainer for \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\"" May 15 10:44:57.230858 env[1356]: time="2025-05-15T10:44:57.230838672Z" level=info msg="RemoveContainer for \"0d22ab0188d86a0b3c257c6c8a21fa444a512ebfb10e241ac19137e95d10dcbf\" returns successfully" May 15 10:44:57.231007 kubelet[2321]: I0515 10:44:57.230995 2321 scope.go:117] "RemoveContainer" containerID="ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1" May 15 10:44:57.235782 env[1356]: time="2025-05-15T10:44:57.235498944Z" level=info msg="RemoveContainer for \"ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1\"" May 15 10:44:57.236781 env[1356]: time="2025-05-15T10:44:57.236740508Z" level=info msg="RemoveContainer for \"ffb9cd831efe22e98e191468f0ad8cec9fdb07e62e6247d3f6fab0fa50fadab1\" returns successfully" May 15 10:44:57.236878 kubelet[2321]: I0515 10:44:57.236818 2321 scope.go:117] "RemoveContainer" containerID="1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2" May 15 10:44:57.237374 env[1356]: time="2025-05-15T10:44:57.237359491Z" level=info msg="RemoveContainer for \"1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2\"" May 15 10:44:57.238375 env[1356]: time="2025-05-15T10:44:57.238358720Z" level=info msg="RemoveContainer for 
\"1c2ea59371bd4fe33bf2045b7874817cd1ac64152bda919970c516765210b8f2\" returns successfully" May 15 10:44:57.238465 kubelet[2321]: I0515 10:44:57.238434 2321 scope.go:117] "RemoveContainer" containerID="6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487" May 15 10:44:57.239162 env[1356]: time="2025-05-15T10:44:57.238983454Z" level=info msg="RemoveContainer for \"6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487\"" May 15 10:44:57.239983 env[1356]: time="2025-05-15T10:44:57.239946547Z" level=info msg="RemoveContainer for \"6e65b0c8f956bac178ac1bce5471082333cb3eb61e553a24612a6b34f342a487\" returns successfully" May 15 10:44:57.240029 kubelet[2321]: I0515 10:44:57.240021 2321 scope.go:117] "RemoveContainer" containerID="8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b" May 15 10:44:57.240738 env[1356]: time="2025-05-15T10:44:57.240721522Z" level=info msg="RemoveContainer for \"8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b\"" May 15 10:44:57.242469 env[1356]: time="2025-05-15T10:44:57.242452265Z" level=info msg="RemoveContainer for \"8fb1f81f878a0589a842e34c879ab96e17b42ef9c9abe66ad0fabe5af7170b3b\" returns successfully" May 15 10:44:57.450181 update_engine[1347]: I0515 10:44:57.450086 1347 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.450896 1347 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 10:44:57.451523 update_engine[1347]: E0515 10:44:57.450958 1347 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.450994 1347 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.450999 1347 omaha_request_action.cc:621] Omaha request response: May 15 10:44:57.451523 update_engine[1347]: E0515 10:44:57.451036 1347 
omaha_request_action.cc:640] Omaha request network transfer failed. May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451047 1347 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451049 1347 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451051 1347 update_attempter.cc:306] Processing Done. May 15 10:44:57.451523 update_engine[1347]: E0515 10:44:57.451079 1347 update_attempter.cc:619] Update failed. May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451083 1347 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451086 1347 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451092 1347 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451155 1347 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451171 1347 omaha_request_action.cc:270] Posting an Omaha request to disabled May 15 10:44:57.451523 update_engine[1347]: I0515 10:44:57.451174 1347 omaha_request_action.cc:271] Request: May 15 10:44:57.451523 update_engine[1347]: May 15 10:44:57.451523 update_engine[1347]: May 15 10:44:57.453255 update_engine[1347]: May 15 10:44:57.453255 update_engine[1347]: May 15 10:44:57.453255 update_engine[1347]: May 15 10:44:57.453255 update_engine[1347]: May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451176 1347 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451241 1347 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 10:44:57.453255 update_engine[1347]: E0515 10:44:57.451273 1347 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451300 1347 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451304 1347 omaha_request_action.cc:621] Omaha request response: May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451306 1347 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451308 1347 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451309 1347 update_attempter.cc:306] Processing Done. May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451310 1347 update_attempter.cc:310] Error event sent. 
May 15 10:44:57.453255 update_engine[1347]: I0515 10:44:57.451318 1347 update_check_scheduler.cc:74] Next update check in 42m53s May 15 10:44:57.461995 locksmithd[1403]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 10:44:57.462226 locksmithd[1403]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 10:44:57.866785 kubelet[2321]: E0515 10:44:57.866745 2321 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:44:58.045065 sshd[3879]: pam_unix(sshd:session): session closed for user core May 15 10:44:58.046809 systemd[1]: Started sshd@23-139.178.70.108:22-147.75.109.163:45326.service. May 15 10:44:58.052595 systemd[1]: sshd@22-139.178.70.108:22-147.75.109.163:53076.service: Deactivated successfully. May 15 10:44:58.053106 systemd[1]: session-24.scope: Deactivated successfully. May 15 10:44:58.053858 systemd-logind[1346]: Session 24 logged out. Waiting for processes to exit. May 15 10:44:58.054348 systemd-logind[1346]: Removed session 24. May 15 10:44:58.215788 sshd[4054]: Accepted publickey for core from 147.75.109.163 port 45326 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:58.216510 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:58.220148 systemd[1]: Started session-25.scope. May 15 10:44:58.220417 systemd-logind[1346]: New session 25 of user core. May 15 10:44:58.544528 sshd[4054]: pam_unix(sshd:session): session closed for user core May 15 10:44:58.546228 systemd[1]: Started sshd@24-139.178.70.108:22-147.75.109.163:45328.service. May 15 10:44:58.551797 systemd[1]: sshd@23-139.178.70.108:22-147.75.109.163:45326.service: Deactivated successfully. May 15 10:44:58.553830 systemd[1]: session-25.scope: Deactivated successfully. 
May 15 10:44:58.556724 systemd-logind[1346]: Session 25 logged out. Waiting for processes to exit. May 15 10:44:58.557408 systemd-logind[1346]: Removed session 25. May 15 10:44:58.597343 sshd[4065]: Accepted publickey for core from 147.75.109.163 port 45328 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:58.600464 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:58.602603 kubelet[2321]: I0515 10:44:58.602570 2321 topology_manager.go:215] "Topology Admit Handler" podUID="0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" podNamespace="kube-system" podName="cilium-b6k5c" May 15 10:44:58.610124 systemd[1]: Started session-26.scope. May 15 10:44:58.610406 systemd-logind[1346]: New session 26 of user core. May 15 10:44:58.611720 kubelet[2321]: E0515 10:44:58.611480 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" containerName="mount-cgroup" May 15 10:44:58.611720 kubelet[2321]: E0515 10:44:58.611516 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="06162253-c5ba-41c4-a121-45d3a08feaea" containerName="cilium-operator" May 15 10:44:58.611720 kubelet[2321]: E0515 10:44:58.611525 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" containerName="clean-cilium-state" May 15 10:44:58.611720 kubelet[2321]: E0515 10:44:58.611531 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" containerName="cilium-agent" May 15 10:44:58.611720 kubelet[2321]: E0515 10:44:58.611534 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" containerName="mount-bpf-fs" May 15 10:44:58.611720 kubelet[2321]: E0515 10:44:58.611539 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" containerName="apply-sysctl-overwrites" May 15 
10:44:58.611720 kubelet[2321]: I0515 10:44:58.611570 2321 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" containerName="cilium-agent" May 15 10:44:58.611720 kubelet[2321]: I0515 10:44:58.611578 2321 memory_manager.go:354] "RemoveStaleState removing state" podUID="06162253-c5ba-41c4-a121-45d3a08feaea" containerName="cilium-operator" May 15 10:44:58.728788 kubelet[2321]: I0515 10:44:58.728755 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-kernel\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.728936 kubelet[2321]: I0515 10:44:58.728798 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgprs\" (UniqueName: \"kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-kube-api-access-kgprs\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.728936 kubelet[2321]: I0515 10:44:58.728817 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-clustermesh-secrets\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.728936 kubelet[2321]: I0515 10:44:58.728835 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-bpf-maps\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.728936 kubelet[2321]: I0515 10:44:58.728844 2321 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hostproc\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.728936 kubelet[2321]: I0515 10:44:58.728855 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cni-path\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.728936 kubelet[2321]: I0515 10:44:58.728864 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-run\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729151 kubelet[2321]: I0515 10:44:58.728874 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-xtables-lock\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729151 kubelet[2321]: I0515 10:44:58.728884 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-etc-cni-netd\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729151 kubelet[2321]: I0515 10:44:58.728893 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-config-path\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729151 kubelet[2321]: I0515 10:44:58.728902 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-net\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729151 kubelet[2321]: I0515 10:44:58.728914 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hubble-tls\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729151 kubelet[2321]: I0515 10:44:58.728927 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-lib-modules\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729343 kubelet[2321]: I0515 10:44:58.728938 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-ipsec-secrets\") pod \"cilium-b6k5c\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.729343 kubelet[2321]: I0515 10:44:58.728950 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-cgroup\") pod \"cilium-b6k5c\" (UID: 
\"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " pod="kube-system/cilium-b6k5c" May 15 10:44:58.768237 kubelet[2321]: I0515 10:44:58.768213 2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b56cdb3-6b72-4493-896f-9c54ddf87971" path="/var/lib/kubelet/pods/2b56cdb3-6b72-4493-896f-9c54ddf87971/volumes" May 15 10:44:58.895400 env[1356]: time="2025-05-15T10:44:58.895050153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b6k5c,Uid:0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6,Namespace:kube-system,Attempt:0,}" May 15 10:44:58.908521 systemd[1]: Started sshd@25-139.178.70.108:22-147.75.109.163:45344.service. May 15 10:44:58.909535 sshd[4065]: pam_unix(sshd:session): session closed for user core May 15 10:44:58.910615 env[1356]: time="2025-05-15T10:44:58.906525162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:44:58.910615 env[1356]: time="2025-05-15T10:44:58.906552353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:44:58.910615 env[1356]: time="2025-05-15T10:44:58.906560904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:44:58.910615 env[1356]: time="2025-05-15T10:44:58.906637662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82 pid=4088 runtime=io.containerd.runc.v2 May 15 10:44:58.912429 systemd-logind[1346]: Session 26 logged out. Waiting for processes to exit. May 15 10:44:58.913258 systemd[1]: sshd@24-139.178.70.108:22-147.75.109.163:45328.service: Deactivated successfully. May 15 10:44:58.913821 systemd[1]: session-26.scope: Deactivated successfully. May 15 10:44:58.914594 systemd-logind[1346]: Removed session 26. 
May 15 10:44:58.946279 env[1356]: time="2025-05-15T10:44:58.946248201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b6k5c,Uid:0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82\"" May 15 10:44:58.949131 env[1356]: time="2025-05-15T10:44:58.949105703Z" level=info msg="CreateContainer within sandbox \"afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:44:58.949774 sshd[4098]: Accepted publickey for core from 147.75.109.163 port 45344 ssh2: RSA SHA256:5jNLHoTZfjCzTOKQrCP5LbgIW1XBYqjk9sc7IZ/f9u8 May 15 10:44:58.951851 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:44:58.958266 systemd[1]: Started session-27.scope. May 15 10:44:58.958942 systemd-logind[1346]: New session 27 of user core. May 15 10:44:58.974392 env[1356]: time="2025-05-15T10:44:58.974358424Z" level=info msg="CreateContainer within sandbox \"afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142\"" May 15 10:44:58.974943 env[1356]: time="2025-05-15T10:44:58.974874578Z" level=info msg="StartContainer for \"4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142\"" May 15 10:44:59.012031 env[1356]: time="2025-05-15T10:44:59.011992720Z" level=info msg="StartContainer for \"4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142\" returns successfully" May 15 10:44:59.061164 env[1356]: time="2025-05-15T10:44:59.061121895Z" level=info msg="shim disconnected" id=4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142 May 15 10:44:59.061164 env[1356]: time="2025-05-15T10:44:59.061155506Z" level=warning msg="cleaning up after shim disconnected" 
id=4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142 namespace=k8s.io May 15 10:44:59.061164 env[1356]: time="2025-05-15T10:44:59.061161673Z" level=info msg="cleaning up dead shim" May 15 10:44:59.068422 env[1356]: time="2025-05-15T10:44:59.067775397Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:44:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" May 15 10:44:59.235933 env[1356]: time="2025-05-15T10:44:59.235856111Z" level=info msg="StopPodSandbox for \"afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82\"" May 15 10:44:59.235933 env[1356]: time="2025-05-15T10:44:59.235906164Z" level=info msg="Container to stop \"4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:44:59.266556 env[1356]: time="2025-05-15T10:44:59.266529355Z" level=info msg="shim disconnected" id=afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82 May 15 10:44:59.266706 env[1356]: time="2025-05-15T10:44:59.266694099Z" level=warning msg="cleaning up after shim disconnected" id=afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82 namespace=k8s.io May 15 10:44:59.266762 env[1356]: time="2025-05-15T10:44:59.266747159Z" level=info msg="cleaning up dead shim" May 15 10:44:59.272436 env[1356]: time="2025-05-15T10:44:59.272409782Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:44:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4217 runtime=io.containerd.runc.v2\n" May 15 10:44:59.272746 env[1356]: time="2025-05-15T10:44:59.272731956Z" level=info msg="TearDown network for sandbox \"afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82\" successfully" May 15 10:44:59.272812 env[1356]: time="2025-05-15T10:44:59.272801018Z" level=info msg="StopPodSandbox for \"afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82\" returns successfully" May 15 
10:44:59.333310 kubelet[2321]: I0515 10:44:59.333273 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-config-path\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.333458 kubelet[2321]: I0515 10:44:59.333449 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-lib-modules\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.333525 kubelet[2321]: I0515 10:44:59.333517 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-xtables-lock\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.333610 kubelet[2321]: I0515 10:44:59.333599 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-ipsec-secrets\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.333709 kubelet[2321]: I0515 10:44:59.333698 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgprs\" (UniqueName: \"kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-kube-api-access-kgprs\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335329 kubelet[2321]: I0515 10:44:59.335317 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-bpf-maps\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335417 kubelet[2321]: I0515 10:44:59.335408 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hostproc\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335479 kubelet[2321]: I0515 10:44:59.335462 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-net\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335563 kubelet[2321]: I0515 10:44:59.335555 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hubble-tls\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335616 kubelet[2321]: I0515 10:44:59.335608 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-cgroup\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335673 kubelet[2321]: I0515 10:44:59.335666 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cni-path\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335746 kubelet[2321]: I0515 10:44:59.335735 2321 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-kernel\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335823 kubelet[2321]: I0515 10:44:59.335815 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-clustermesh-secrets\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335886 kubelet[2321]: I0515 10:44:59.335871 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-run\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.335937 kubelet[2321]: I0515 10:44:59.335929 2321 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-etc-cni-netd\") pod \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\" (UID: \"0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6\") " May 15 10:44:59.336015 kubelet[2321]: I0515 10:44:59.334754 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336075 kubelet[2321]: I0515 10:44:59.334764 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336159 kubelet[2321]: I0515 10:44:59.335263 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:44:59.336227 kubelet[2321]: I0515 10:44:59.336006 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336283 kubelet[2321]: I0515 10:44:59.336274 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336382 kubelet[2321]: I0515 10:44:59.336369 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336453 kubelet[2321]: I0515 10:44:59.336443 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336748 kubelet[2321]: I0515 10:44:59.336717 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-kube-api-access-kgprs" (OuterVolumeSpecName: "kube-api-access-kgprs") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "kube-api-access-kgprs". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:44:59.336790 kubelet[2321]: I0515 10:44:59.336751 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336790 kubelet[2321]: I0515 10:44:59.336768 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.336790 kubelet[2321]: I0515 10:44:59.336785 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.337209 kubelet[2321]: I0515 10:44:59.337191 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:44:59.338790 kubelet[2321]: I0515 10:44:59.338778 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:44:59.338970 kubelet[2321]: I0515 10:44:59.338950 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:44:59.340225 kubelet[2321]: I0515 10:44:59.340211 2321 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" (UID: "0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:44:59.436620 kubelet[2321]: I0515 10:44:59.436591 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.436846 kubelet[2321]: I0515 10:44:59.436837 2321 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.436907 kubelet[2321]: I0515 10:44:59.436899 2321 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.436964 kubelet[2321]: I0515 10:44:59.436957 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-ipsec-secrets\") on node 
\"localhost\" DevicePath \"\"" May 15 10:44:59.437012 kubelet[2321]: I0515 10:44:59.437004 2321 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kgprs\" (UniqueName: \"kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-kube-api-access-kgprs\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437061 kubelet[2321]: I0515 10:44:59.437054 2321 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437108 kubelet[2321]: I0515 10:44:59.437101 2321 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437157 kubelet[2321]: I0515 10:44:59.437150 2321 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437202 kubelet[2321]: I0515 10:44:59.437195 2321 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437248 kubelet[2321]: I0515 10:44:59.437241 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437294 kubelet[2321]: I0515 10:44:59.437287 2321 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437340 kubelet[2321]: I0515 10:44:59.437333 2321 reconciler_common.go:289] 
"Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437387 kubelet[2321]: I0515 10:44:59.437380 2321 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437433 kubelet[2321]: I0515 10:44:59.437426 2321 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.437478 kubelet[2321]: I0515 10:44:59.437471 2321 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:44:59.834832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afb5a8fd81d24d3273d522d920005369e97a2cedce984dacb517383e01fcbc82-shm.mount: Deactivated successfully. May 15 10:44:59.834956 systemd[1]: var-lib-kubelet-pods-0b0e396f\x2d91b5\x2d4ec6\x2db3f6\x2da79ffa9967e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgprs.mount: Deactivated successfully. May 15 10:44:59.835051 systemd[1]: var-lib-kubelet-pods-0b0e396f\x2d91b5\x2d4ec6\x2db3f6\x2da79ffa9967e6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 15 10:44:59.835135 systemd[1]: var-lib-kubelet-pods-0b0e396f\x2d91b5\x2d4ec6\x2db3f6\x2da79ffa9967e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:44:59.835219 systemd[1]: var-lib-kubelet-pods-0b0e396f\x2d91b5\x2d4ec6\x2db3f6\x2da79ffa9967e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 10:45:00.237247 kubelet[2321]: I0515 10:45:00.237180 2321 scope.go:117] "RemoveContainer" containerID="4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142" May 15 10:45:00.238529 env[1356]: time="2025-05-15T10:45:00.238496838Z" level=info msg="RemoveContainer for \"4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142\"" May 15 10:45:00.242247 env[1356]: time="2025-05-15T10:45:00.242181080Z" level=info msg="RemoveContainer for \"4abd6cb940728b205b331536bec78ea280069b093d6cc1b5b62b6a66c44ee142\" returns successfully" May 15 10:45:00.264914 kubelet[2321]: I0515 10:45:00.264753 2321 topology_manager.go:215] "Topology Admit Handler" podUID="eb50e73c-8769-4755-92fa-670cebc2931b" podNamespace="kube-system" podName="cilium-tb8gv" May 15 10:45:00.264914 kubelet[2321]: E0515 10:45:00.264789 2321 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" containerName="mount-cgroup" May 15 10:45:00.264914 kubelet[2321]: I0515 10:45:00.264805 2321 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" containerName="mount-cgroup" May 15 10:45:00.340990 kubelet[2321]: I0515 10:45:00.340946 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-hostproc\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.340990 kubelet[2321]: I0515 10:45:00.340991 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-lib-modules\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341153 kubelet[2321]: I0515 10:45:00.341010 2321 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eb50e73c-8769-4755-92fa-670cebc2931b-cilium-ipsec-secrets\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341153 kubelet[2321]: I0515 10:45:00.341025 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-host-proc-sys-kernel\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341153 kubelet[2321]: I0515 10:45:00.341050 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-bpf-maps\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341153 kubelet[2321]: I0515 10:45:00.341063 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb50e73c-8769-4755-92fa-670cebc2931b-clustermesh-secrets\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341153 kubelet[2321]: I0515 10:45:00.341077 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-host-proc-sys-net\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341153 kubelet[2321]: I0515 10:45:00.341094 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/eb50e73c-8769-4755-92fa-670cebc2931b-hubble-tls\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341316 kubelet[2321]: I0515 10:45:00.341106 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-cilium-cgroup\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341316 kubelet[2321]: I0515 10:45:00.341128 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-etc-cni-netd\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341316 kubelet[2321]: I0515 10:45:00.341142 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-xtables-lock\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341316 kubelet[2321]: I0515 10:45:00.341153 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb50e73c-8769-4755-92fa-670cebc2931b-cilium-config-path\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341316 kubelet[2321]: I0515 10:45:00.341166 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45rwr\" (UniqueName: \"kubernetes.io/projected/eb50e73c-8769-4755-92fa-670cebc2931b-kube-api-access-45rwr\") pod \"cilium-tb8gv\" (UID: 
\"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341316 kubelet[2321]: I0515 10:45:00.341178 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-cilium-run\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.341481 kubelet[2321]: I0515 10:45:00.341199 2321 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb50e73c-8769-4755-92fa-670cebc2931b-cni-path\") pod \"cilium-tb8gv\" (UID: \"eb50e73c-8769-4755-92fa-670cebc2931b\") " pod="kube-system/cilium-tb8gv" May 15 10:45:00.568840 env[1356]: time="2025-05-15T10:45:00.568551066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tb8gv,Uid:eb50e73c-8769-4755-92fa-670cebc2931b,Namespace:kube-system,Attempt:0,}" May 15 10:45:00.575032 env[1356]: time="2025-05-15T10:45:00.574984654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:45:00.575360 env[1356]: time="2025-05-15T10:45:00.575019150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:45:00.575360 env[1356]: time="2025-05-15T10:45:00.575030484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:45:00.575360 env[1356]: time="2025-05-15T10:45:00.575207998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f pid=4245 runtime=io.containerd.runc.v2 May 15 10:45:00.597142 env[1356]: time="2025-05-15T10:45:00.597116420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tb8gv,Uid:eb50e73c-8769-4755-92fa-670cebc2931b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\"" May 15 10:45:00.601556 env[1356]: time="2025-05-15T10:45:00.601364053Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:45:00.608696 env[1356]: time="2025-05-15T10:45:00.608652732Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d69e3c68df6f7065a50e9c83bb1b0a3f5ecc02b57f3f3c721414302947e1939\"" May 15 10:45:00.609847 env[1356]: time="2025-05-15T10:45:00.609831579Z" level=info msg="StartContainer for \"1d69e3c68df6f7065a50e9c83bb1b0a3f5ecc02b57f3f3c721414302947e1939\"" May 15 10:45:00.641572 env[1356]: time="2025-05-15T10:45:00.641548176Z" level=info msg="StartContainer for \"1d69e3c68df6f7065a50e9c83bb1b0a3f5ecc02b57f3f3c721414302947e1939\" returns successfully" May 15 10:45:00.654881 env[1356]: time="2025-05-15T10:45:00.654845387Z" level=info msg="shim disconnected" id=1d69e3c68df6f7065a50e9c83bb1b0a3f5ecc02b57f3f3c721414302947e1939 May 15 10:45:00.654881 env[1356]: time="2025-05-15T10:45:00.654878576Z" level=warning msg="cleaning up after shim disconnected" id=1d69e3c68df6f7065a50e9c83bb1b0a3f5ecc02b57f3f3c721414302947e1939 
namespace=k8s.io May 15 10:45:00.654881 env[1356]: time="2025-05-15T10:45:00.654884394Z" level=info msg="cleaning up dead shim" May 15 10:45:00.659721 env[1356]: time="2025-05-15T10:45:00.659668150Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4328 runtime=io.containerd.runc.v2\n" May 15 10:45:00.765974 kubelet[2321]: E0515 10:45:00.765805 2321 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qplrx" podUID="37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4" May 15 10:45:00.768344 kubelet[2321]: I0515 10:45:00.768158 2321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6" path="/var/lib/kubelet/pods/0b0e396f-91b5-4ec6-b3f6-a79ffa9967e6/volumes" May 15 10:45:01.240988 env[1356]: time="2025-05-15T10:45:01.240963184Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:45:01.264021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517384624.mount: Deactivated successfully. 
May 15 10:45:01.267406 env[1356]: time="2025-05-15T10:45:01.267375627Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"451631bb0b77c620e6fe5d508b79e413d8eba1858343f1657fb935358a75c5d5\"" May 15 10:45:01.268415 env[1356]: time="2025-05-15T10:45:01.267824446Z" level=info msg="StartContainer for \"451631bb0b77c620e6fe5d508b79e413d8eba1858343f1657fb935358a75c5d5\"" May 15 10:45:01.298603 env[1356]: time="2025-05-15T10:45:01.298489286Z" level=info msg="StartContainer for \"451631bb0b77c620e6fe5d508b79e413d8eba1858343f1657fb935358a75c5d5\" returns successfully" May 15 10:45:01.321796 env[1356]: time="2025-05-15T10:45:01.321767950Z" level=info msg="shim disconnected" id=451631bb0b77c620e6fe5d508b79e413d8eba1858343f1657fb935358a75c5d5 May 15 10:45:01.321931 env[1356]: time="2025-05-15T10:45:01.321919158Z" level=warning msg="cleaning up after shim disconnected" id=451631bb0b77c620e6fe5d508b79e413d8eba1858343f1657fb935358a75c5d5 namespace=k8s.io May 15 10:45:01.321981 env[1356]: time="2025-05-15T10:45:01.321971202Z" level=info msg="cleaning up dead shim" May 15 10:45:01.326297 env[1356]: time="2025-05-15T10:45:01.326279578Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4390 runtime=io.containerd.runc.v2\n" May 15 10:45:01.834314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-451631bb0b77c620e6fe5d508b79e413d8eba1858343f1657fb935358a75c5d5-rootfs.mount: Deactivated successfully. 
May 15 10:45:02.244921 env[1356]: time="2025-05-15T10:45:02.244740996Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 10:45:02.251485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566388703.mount: Deactivated successfully.
May 15 10:45:02.274324 env[1356]: time="2025-05-15T10:45:02.274293288Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7df72bd4b714d8fd9785790a234a7002df403c71710aab0b85da4019dee1b384\""
May 15 10:45:02.274917 env[1356]: time="2025-05-15T10:45:02.274899060Z" level=info msg="StartContainer for \"7df72bd4b714d8fd9785790a234a7002df403c71710aab0b85da4019dee1b384\""
May 15 10:45:02.322157 env[1356]: time="2025-05-15T10:45:02.322123627Z" level=info msg="StartContainer for \"7df72bd4b714d8fd9785790a234a7002df403c71710aab0b85da4019dee1b384\" returns successfully"
May 15 10:45:02.448172 env[1356]: time="2025-05-15T10:45:02.448129249Z" level=info msg="shim disconnected" id=7df72bd4b714d8fd9785790a234a7002df403c71710aab0b85da4019dee1b384
May 15 10:45:02.448172 env[1356]: time="2025-05-15T10:45:02.448171191Z" level=warning msg="cleaning up after shim disconnected" id=7df72bd4b714d8fd9785790a234a7002df403c71710aab0b85da4019dee1b384 namespace=k8s.io
May 15 10:45:02.448319 env[1356]: time="2025-05-15T10:45:02.448180485Z" level=info msg="cleaning up dead shim"
May 15 10:45:02.453548 env[1356]: time="2025-05-15T10:45:02.453521235Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4450 runtime=io.containerd.runc.v2\n"
May 15 10:45:02.766029 kubelet[2321]: E0515 10:45:02.766001 2321 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qplrx" podUID="37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4"
May 15 10:45:02.834352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7df72bd4b714d8fd9785790a234a7002df403c71710aab0b85da4019dee1b384-rootfs.mount: Deactivated successfully.
May 15 10:45:02.867807 kubelet[2321]: E0515 10:45:02.867772 2321 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 10:45:03.247775 env[1356]: time="2025-05-15T10:45:03.247747349Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 10:45:03.258001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684405087.mount: Deactivated successfully.
May 15 10:45:03.261971 env[1356]: time="2025-05-15T10:45:03.261943464Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"835ae390d10d6ba1a86126b031147b38e9347406e1d772be5c67143e46b6acc5\""
May 15 10:45:03.263096 env[1356]: time="2025-05-15T10:45:03.262377844Z" level=info msg="StartContainer for \"835ae390d10d6ba1a86126b031147b38e9347406e1d772be5c67143e46b6acc5\""
May 15 10:45:03.294038 env[1356]: time="2025-05-15T10:45:03.294014101Z" level=info msg="StartContainer for \"835ae390d10d6ba1a86126b031147b38e9347406e1d772be5c67143e46b6acc5\" returns successfully"
May 15 10:45:03.304807 env[1356]: time="2025-05-15T10:45:03.304780109Z" level=info msg="shim disconnected" id=835ae390d10d6ba1a86126b031147b38e9347406e1d772be5c67143e46b6acc5
May 15 10:45:03.304947 env[1356]: time="2025-05-15T10:45:03.304935952Z" level=warning msg="cleaning up after shim disconnected" id=835ae390d10d6ba1a86126b031147b38e9347406e1d772be5c67143e46b6acc5 namespace=k8s.io
May 15 10:45:03.305000 env[1356]: time="2025-05-15T10:45:03.304985564Z" level=info msg="cleaning up dead shim"
May 15 10:45:03.309346 env[1356]: time="2025-05-15T10:45:03.309324937Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4507 runtime=io.containerd.runc.v2\n"
May 15 10:45:03.834415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-835ae390d10d6ba1a86126b031147b38e9347406e1d772be5c67143e46b6acc5-rootfs.mount: Deactivated successfully.
May 15 10:45:04.249692 env[1356]: time="2025-05-15T10:45:04.249514141Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 10:45:04.312401 env[1356]: time="2025-05-15T10:45:04.312369163Z" level=info msg="CreateContainer within sandbox \"1b0bec5e1b41cfd291b73f6bbc3a425e4d54d71a1989033e44116bd73b01320f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88\""
May 15 10:45:04.312927 env[1356]: time="2025-05-15T10:45:04.312914797Z" level=info msg="StartContainer for \"dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88\""
May 15 10:45:04.350851 env[1356]: time="2025-05-15T10:45:04.350823294Z" level=info msg="StartContainer for \"dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88\" returns successfully"
May 15 10:45:04.765317 kubelet[2321]: E0515 10:45:04.765232 2321 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qplrx" podUID="37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4"
May 15 10:45:04.834739 systemd[1]: run-containerd-runc-k8s.io-dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88-runc.jp8O4a.mount: Deactivated successfully.
May 15 10:45:05.287590 kubelet[2321]: I0515 10:45:05.287547 2321 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T10:45:05Z","lastTransitionTime":"2025-05-15T10:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 10:45:05.959717 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 10:45:06.765270 kubelet[2321]: E0515 10:45:06.765232 2321 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qplrx" podUID="37ac3747-b1d3-4cf8-8f5e-cedf8441b4c4"
May 15 10:45:07.267599 systemd[1]: run-containerd-runc-k8s.io-dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88-runc.ngCua1.mount: Deactivated successfully.
May 15 10:45:08.386757 systemd-networkd[1114]: lxc_health: Link UP
May 15 10:45:08.451266 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 10:45:08.450808 systemd-networkd[1114]: lxc_health: Gained carrier
May 15 10:45:08.592055 kubelet[2321]: I0515 10:45:08.592005 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tb8gv" podStartSLOduration=8.591992406 podStartE2EDuration="8.591992406s" podCreationTimestamp="2025-05-15 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:45:05.269252745 +0000 UTC m=+132.621474389" watchObservedRunningTime="2025-05-15 10:45:08.591992406 +0000 UTC m=+135.944214043"
May 15 10:45:09.430414 systemd[1]: run-containerd-runc-k8s.io-dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88-runc.stImIP.mount: Deactivated successfully.
May 15 10:45:10.303822 systemd-networkd[1114]: lxc_health: Gained IPv6LL
May 15 10:45:11.521586 systemd[1]: run-containerd-runc-k8s.io-dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88-runc.NZdonY.mount: Deactivated successfully.
May 15 10:45:13.624242 systemd[1]: run-containerd-runc-k8s.io-dc6a85b93c7b378345c5a8b8577c77126b4a26f9354986cb4b9cbb88080e4b88-runc.Kz4Htl.mount: Deactivated successfully.
May 15 10:45:13.660289 sshd[4098]: pam_unix(sshd:session): session closed for user core
May 15 10:45:13.664538 systemd[1]: sshd@25-139.178.70.108:22-147.75.109.163:45344.service: Deactivated successfully.
May 15 10:45:13.665409 systemd[1]: session-27.scope: Deactivated successfully.
May 15 10:45:13.665798 systemd-logind[1346]: Session 27 logged out. Waiting for processes to exit.
May 15 10:45:13.666340 systemd-logind[1346]: Removed session 27.