Feb 12 21:53:14.650564 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 21:53:14.650615 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:53:14.650622 kernel: Disabled fast string operations Feb 12 21:53:14.650626 kernel: BIOS-provided physical RAM map: Feb 12 21:53:14.650630 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Feb 12 21:53:14.650634 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Feb 12 21:53:14.650640 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Feb 12 21:53:14.650645 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Feb 12 21:53:14.650649 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Feb 12 21:53:14.650653 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Feb 12 21:53:14.650657 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Feb 12 21:53:14.650661 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Feb 12 21:53:14.650665 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Feb 12 21:53:14.650669 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 12 21:53:14.650675 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Feb 12 21:53:14.650680 kernel: NX (Execute Disable) protection: active Feb 12 21:53:14.650684 kernel: SMBIOS 2.7 present. Feb 12 21:53:14.650689 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Feb 12 21:53:14.650693 kernel: vmware: hypercall mode: 0x00 Feb 12 21:53:14.650698 kernel: Hypervisor detected: VMware Feb 12 21:53:14.650703 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Feb 12 21:53:14.650707 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Feb 12 21:53:14.650712 kernel: vmware: using clock offset of 7496660341 ns Feb 12 21:53:14.650716 kernel: tsc: Detected 3408.000 MHz processor Feb 12 21:53:14.650721 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 21:53:14.650726 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 21:53:14.650731 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Feb 12 21:53:14.650735 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 21:53:14.650740 kernel: total RAM covered: 3072M Feb 12 21:53:14.650745 kernel: Found optimal setting for mtrr clean up Feb 12 21:53:14.650750 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Feb 12 21:53:14.650755 kernel: Using GB pages for direct mapping Feb 12 21:53:14.650759 kernel: ACPI: Early table checksum verification disabled Feb 12 21:53:14.650764 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Feb 12 21:53:14.650768 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Feb 12 21:53:14.650773 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Feb 12 21:53:14.650778 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Feb 12 21:53:14.650782 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Feb 12 21:53:14.650786 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Feb 12 21:53:14.650792 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Feb 12 21:53:14.650798 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Feb 12 21:53:14.650804 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Feb 12 21:53:14.650809 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Feb 12 21:53:14.650814 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Feb 12 21:53:14.650820 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Feb 12 21:53:14.650824 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Feb 12 21:53:14.650829 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Feb 12 21:53:14.650834 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Feb 12 21:53:14.650839 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Feb 12 21:53:14.650844 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Feb 12 21:53:14.650849 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Feb 12 21:53:14.650854 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Feb 12 21:53:14.650859 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Feb 12 21:53:14.650865 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Feb 12 21:53:14.650870 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Feb 12 21:53:14.650875 kernel: system APIC only can use physical flat Feb 12 21:53:14.650879 kernel: Setting APIC routing to physical flat. 
Feb 12 21:53:14.650885 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 12 21:53:14.650889 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 12 21:53:14.650894 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 12 21:53:14.650899 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 12 21:53:14.650904 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 12 21:53:14.650910 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 12 21:53:14.650915 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 12 21:53:14.650919 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 12 21:53:14.650924 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Feb 12 21:53:14.650929 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Feb 12 21:53:14.650934 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Feb 12 21:53:14.650938 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Feb 12 21:53:14.650943 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Feb 12 21:53:14.650948 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Feb 12 21:53:14.650953 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Feb 12 21:53:14.650958 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Feb 12 21:53:14.650963 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Feb 12 21:53:14.650968 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Feb 12 21:53:14.650973 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Feb 12 21:53:14.650977 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Feb 12 21:53:14.650982 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Feb 12 21:53:14.650987 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Feb 12 21:53:14.650992 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Feb 12 21:53:14.650997 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Feb 12 21:53:14.651002 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Feb 12 21:53:14.651008 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Feb 12 21:53:14.651012 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Feb 12 21:53:14.651017 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Feb 12 21:53:14.651022 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Feb 12 21:53:14.651027 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Feb 12 21:53:14.651032 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Feb 12 21:53:14.651036 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Feb 12 21:53:14.651041 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Feb 12 21:53:14.651046 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Feb 12 21:53:14.651051 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Feb 12 21:53:14.651057 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Feb 12 21:53:14.651062 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Feb 12 21:53:14.651067 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Feb 12 21:53:14.651071 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Feb 12 21:53:14.651076 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Feb 12 21:53:14.651081 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Feb 12 21:53:14.651085 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Feb 12 21:53:14.651090 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Feb 12 21:53:14.651095 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Feb 12 21:53:14.651100 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Feb 12 21:53:14.651106 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Feb 12 21:53:14.651111 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Feb 12 21:53:14.651115 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Feb 12 21:53:14.651120 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Feb 12 21:53:14.651125 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Feb 12 21:53:14.651130 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Feb 12 21:53:14.651135 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Feb 12 21:53:14.651140 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Feb 12 21:53:14.651144 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Feb 12 21:53:14.651149 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Feb 12 21:53:14.651154 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Feb 12 21:53:14.651159 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Feb 12 21:53:14.651164 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Feb 12 21:53:14.651169 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Feb 12 21:53:14.651174 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Feb 12 21:53:14.651179 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Feb 12 21:53:14.651188 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Feb 12 21:53:14.651194 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Feb 12 21:53:14.651199 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Feb 12 21:53:14.651204 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Feb 12 21:53:14.651210 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Feb 12 21:53:14.651215 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Feb 12 21:53:14.651221 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Feb 12 21:53:14.651226 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Feb 12 21:53:14.651231 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Feb 12 21:53:14.651236 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Feb 12 21:53:14.651241 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Feb 12 21:53:14.651246 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Feb 12 21:53:14.651252 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Feb 12 21:53:14.651258 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Feb 12 21:53:14.651263 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Feb 12 21:53:14.651268 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Feb 12 21:53:14.651273 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Feb 12 21:53:14.651278 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Feb 12 21:53:14.651284 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Feb 12 21:53:14.651289 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Feb 12 21:53:14.651294 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Feb 12 21:53:14.651299 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Feb 12 21:53:14.651305 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Feb 12 21:53:14.651310 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Feb 12 21:53:14.651315 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Feb 12 21:53:14.651320 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Feb 12 21:53:14.651325 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Feb 12 21:53:14.651331 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Feb 12 21:53:14.651336 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Feb 12 21:53:14.651341 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Feb 12 21:53:14.651346 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Feb 12 21:53:14.651352 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Feb 12 21:53:14.651357 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Feb 12 21:53:14.651362 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Feb 12 21:53:14.651367 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Feb 12 21:53:14.651372 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Feb 12 21:53:14.651377 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Feb 12 21:53:14.651383 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Feb 12 21:53:14.651388 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Feb 12 21:53:14.651393 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Feb 12 21:53:14.651398 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Feb 12 21:53:14.651404 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Feb 12 21:53:14.651409 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Feb 12 21:53:14.651414 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Feb 12 21:53:14.651419 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Feb 12 21:53:14.651424 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Feb 12 21:53:14.651429 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Feb 12 21:53:14.651435 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Feb 12 21:53:14.651440 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Feb 12 21:53:14.651445 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Feb 12 21:53:14.651450 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Feb 12 21:53:14.651456 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Feb 12 21:53:14.651461 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Feb 12 21:53:14.651466 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Feb 12 21:53:14.651471 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Feb 12 21:53:14.651476 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Feb 12 21:53:14.651482 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Feb 12 21:53:14.651487 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Feb 12 21:53:14.651492 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Feb 12 21:53:14.651497 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Feb 12 21:53:14.651502 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Feb 12 21:53:14.651508 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Feb 12 21:53:14.651513 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Feb 12 21:53:14.651518 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Feb 12 21:53:14.651524 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Feb 12 21:53:14.651529 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Feb 12 21:53:14.651535 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Feb 12 21:53:14.651540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 12 21:53:14.651545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 12 21:53:14.651551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Feb 12 21:53:14.651556 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Feb 12 21:53:14.651563 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Feb 12 21:53:14.651578 kernel: Zone ranges: Feb 12 21:53:14.651585 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 21:53:14.651590 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Feb 12 21:53:14.651595 kernel: Normal empty Feb 12 21:53:14.651600 kernel: Movable zone start for each node Feb 12 21:53:14.651606 kernel: Early memory node ranges Feb 12 21:53:14.651611 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Feb 12 21:53:14.651616 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Feb 12 21:53:14.651624 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Feb 12 21:53:14.651629 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Feb 12 21:53:14.651634 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 21:53:14.651640 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Feb 12 21:53:14.651645 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Feb 12 21:53:14.651650 kernel: ACPI: PM-Timer IO Port: 0x1008 Feb 12 21:53:14.651656 kernel: system APIC only can use physical flat Feb 12 21:53:14.651661 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Feb 12 21:53:14.651666 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 12 21:53:14.651672 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 12 21:53:14.651678 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 12 21:53:14.651683 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 12 21:53:14.651688 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 12 21:53:14.651693 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 12 21:53:14.651699 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Feb 12 21:53:14.651704 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 12 21:53:14.651709 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 12 21:53:14.651714 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 12 21:53:14.651719 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 12 21:53:14.651726 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 12 21:53:14.651731 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 12 21:53:14.651736 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 12 21:53:14.651741 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 12 21:53:14.651746 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 12 21:53:14.651752 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Feb 12 21:53:14.651757 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Feb 12 21:53:14.651762 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Feb 12 21:53:14.651767 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Feb 12 21:53:14.651774 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Feb 12 21:53:14.651779 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Feb 12 21:53:14.651784 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Feb 12 21:53:14.651789 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Feb 12 21:53:14.651794 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Feb 12 21:53:14.651800 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Feb 12 21:53:14.651805 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Feb 12 21:53:14.651810 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Feb 12 21:53:14.651815 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Feb 12 21:53:14.651821 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Feb 12 21:53:14.651827 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Feb 12 21:53:14.651832 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Feb 12 21:53:14.651838 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Feb 12 21:53:14.651843 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Feb 12 21:53:14.651848 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Feb 12 21:53:14.651853 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Feb 12 21:53:14.651858 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Feb 12 21:53:14.651864 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Feb 12 21:53:14.651869 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Feb 12 21:53:14.651875 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Feb 12 21:53:14.651880 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Feb 12 21:53:14.651885 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Feb 12 21:53:14.651890 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Feb 12 21:53:14.651896 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Feb 12 21:53:14.651901 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Feb 12 21:53:14.651906 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Feb 12 21:53:14.651911 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Feb 12 21:53:14.651916 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Feb 12 21:53:14.651922 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Feb 12 21:53:14.651928 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Feb 12 21:53:14.651933 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Feb 12 21:53:14.651938 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Feb 12 21:53:14.651943 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Feb 12 21:53:14.651948 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Feb 12 21:53:14.651953 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Feb 12 21:53:14.651959 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Feb 12 21:53:14.651964 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Feb 12 21:53:14.651969 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Feb 12 21:53:14.651975 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Feb 12 21:53:14.651980 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Feb 12 21:53:14.651986 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Feb 12 21:53:14.651991 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Feb 12 21:53:14.651996 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Feb 12 21:53:14.652001 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Feb 12 21:53:14.652006 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Feb 12 21:53:14.652012 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Feb 12 21:53:14.652017 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Feb 12 21:53:14.652022 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Feb 12 21:53:14.652028 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Feb 12 21:53:14.652034 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Feb 12 21:53:14.652039 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Feb 12 21:53:14.652044 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Feb 12 21:53:14.652050 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Feb 12 21:53:14.652055 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Feb 12 21:53:14.652060 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Feb 12 21:53:14.652065 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Feb 12 21:53:14.652070 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Feb 12 21:53:14.652077 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Feb 12 21:53:14.652082 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Feb 12 21:53:14.652087 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Feb 12 21:53:14.652093 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Feb 12 21:53:14.652098 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Feb 12 21:53:14.652103 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Feb 12 21:53:14.652108 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Feb 12 21:53:14.652113 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Feb 12 21:53:14.652118 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Feb 12 21:53:14.652124 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Feb 12 21:53:14.652130 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Feb 12 21:53:14.652135 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Feb 12 21:53:14.652140 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Feb 12 21:53:14.652145 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Feb 12 21:53:14.652150 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Feb 12 21:53:14.652155 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Feb 12 21:53:14.652161 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Feb 12 21:53:14.652166 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Feb 12 21:53:14.652171 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Feb 12 21:53:14.652177 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Feb 12 21:53:14.652182 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Feb 12 21:53:14.652187 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Feb 12 21:53:14.652193 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Feb 12 21:53:14.652198 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Feb 12 21:53:14.652203 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Feb 12 21:53:14.652208 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Feb 12 21:53:14.652213 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Feb 12 21:53:14.652219 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Feb 12 21:53:14.652224 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Feb 12 21:53:14.652230 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Feb 12 21:53:14.652235 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Feb 12 21:53:14.652240 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Feb 12 21:53:14.652246 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Feb 12 21:53:14.652251 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Feb 12 21:53:14.652256 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Feb 12 21:53:14.652261 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Feb 12 21:53:14.652266 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Feb 12 21:53:14.652271 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Feb 12 21:53:14.652278 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Feb 12 21:53:14.652283 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Feb 12 21:53:14.652288 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Feb 12 21:53:14.652293 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Feb 12 21:53:14.652298 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Feb 12 21:53:14.652303 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Feb 12 21:53:14.652309 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Feb 12 21:53:14.652314 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Feb 12 21:53:14.652319 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Feb 12 21:53:14.652325 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Feb 12 21:53:14.652331 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Feb 12 21:53:14.652336 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Feb 12 21:53:14.652341 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Feb 12 21:53:14.652346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Feb 12 21:53:14.652352 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 21:53:14.652357 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Feb 12 21:53:14.652362 kernel: TSC deadline timer available Feb 12 21:53:14.652367 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Feb 12 21:53:14.652372 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Feb 12 21:53:14.652378 kernel: Booting paravirtualized kernel on VMware hypervisor Feb 12 21:53:14.652384 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 21:53:14.652389 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Feb 12 21:53:14.652395 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 12 21:53:14.652400 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 12 21:53:14.652405 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Feb 12 21:53:14.652410 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Feb 12 21:53:14.652415 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Feb 12 21:53:14.652421 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Feb 12 21:53:14.652426 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Feb 12 21:53:14.652431 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Feb 12 21:53:14.652437 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Feb 12 21:53:14.652450 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Feb 12 21:53:14.652456 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Feb 12 21:53:14.652462 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Feb 12 21:53:14.652467 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Feb 12 21:53:14.652473 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Feb 12 21:53:14.652480 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Feb 12 21:53:14.652485 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Feb 12 21:53:14.652490 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Feb 12 21:53:14.652496 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Feb 12 21:53:14.652502 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Feb 12 21:53:14.652507 kernel: Policy zone: DMA32 Feb 12 21:53:14.652514 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:53:14.652520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 12 21:53:14.652526 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Feb 12 21:53:14.652532 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Feb 12 21:53:14.652538 kernel: printk: log_buf_len min size: 262144 bytes Feb 12 21:53:14.652543 kernel: printk: log_buf_len: 1048576 bytes Feb 12 21:53:14.652549 kernel: printk: early log buf free: 239728(91%) Feb 12 21:53:14.652555 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 21:53:14.652560 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 12 21:53:14.652571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 21:53:14.652579 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 153416K reserved, 0K cma-reserved) Feb 12 21:53:14.652587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Feb 12 21:53:14.652593 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 21:53:14.652598 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 21:53:14.652605 kernel: rcu: Hierarchical RCU implementation. Feb 12 21:53:14.652611 kernel: rcu: RCU event tracing is enabled. Feb 12 21:53:14.652617 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Feb 12 21:53:14.652624 kernel: Rude variant of Tasks RCU enabled. Feb 12 21:53:14.652630 kernel: Tracing variant of Tasks RCU enabled. Feb 12 21:53:14.652636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 21:53:14.652646 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Feb 12 21:53:14.652652 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Feb 12 21:53:14.652659 kernel: random: crng init done Feb 12 21:53:14.652665 kernel: Console: colour VGA+ 80x25 Feb 12 21:53:14.652670 kernel: printk: console [tty0] enabled Feb 12 21:53:14.652676 kernel: printk: console [ttyS0] enabled Feb 12 21:53:14.652682 kernel: ACPI: Core revision 20210730 Feb 12 21:53:14.652688 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Feb 12 21:53:14.652694 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 21:53:14.652700 kernel: x2apic enabled Feb 12 21:53:14.652705 kernel: Switched APIC routing to physical x2apic. Feb 12 21:53:14.652711 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 21:53:14.652717 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 12 21:53:14.652723 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Feb 12 21:53:14.652728 kernel: Disabled fast string operations Feb 12 21:53:14.652735 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 12 21:53:14.652741 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 12 21:53:14.652747 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 21:53:14.652753 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 21:53:14.652759 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 12 21:53:14.652765 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 21:53:14.652771 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 12 21:53:14.652776 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 12 21:53:14.652782 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 21:53:14.652789 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 21:53:14.652795 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 21:53:14.652801 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 12 21:53:14.652807 kernel: GDS: Unknown: Dependent on hypervisor status Feb 12 21:53:14.652813 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 21:53:14.652818 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 21:53:14.652824 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 21:53:14.652830 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 21:53:14.652835 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 12 21:53:14.652842 kernel: Freeing SMP alternatives memory: 32K Feb 12 21:53:14.652848 kernel: pid_max: default: 131072 minimum: 1024 Feb 12 21:53:14.652853 kernel: LSM: Security Framework initializing Feb 12 21:53:14.652860 kernel: SELinux: Initializing. Feb 12 21:53:14.652865 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 21:53:14.652871 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 21:53:14.652877 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 12 21:53:14.652883 kernel: Performance Events: Skylake events, core PMU driver. Feb 12 21:53:14.652889 kernel: core: CPUID marked event: 'cpu cycles' unavailable Feb 12 21:53:14.652896 kernel: core: CPUID marked event: 'instructions' unavailable Feb 12 21:53:14.652901 kernel: core: CPUID marked event: 'bus cycles' unavailable Feb 12 21:53:14.652907 kernel: core: CPUID marked event: 'cache references' unavailable Feb 12 21:53:14.652912 kernel: core: CPUID marked event: 'cache misses' unavailable Feb 12 21:53:14.652917 kernel: core: CPUID marked event: 'branch instructions' unavailable Feb 12 21:53:14.652923 kernel: core: CPUID marked event: 'branch misses' unavailable Feb 12 21:53:14.652929 kernel: ... version: 1 Feb 12 21:53:14.652934 kernel: ... bit width: 48 Feb 12 21:53:14.652941 kernel: ... generic registers: 4 Feb 12 21:53:14.652946 kernel: ... value mask: 0000ffffffffffff Feb 12 21:53:14.652952 kernel: ... max period: 000000007fffffff Feb 12 21:53:14.652958 kernel: ... fixed-purpose events: 0 Feb 12 21:53:14.652963 kernel: ... event mask: 000000000000000f Feb 12 21:53:14.652969 kernel: signal: max sigframe size: 1776 Feb 12 21:53:14.652975 kernel: rcu: Hierarchical SRCU implementation. Feb 12 21:53:14.652980 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 12 21:53:14.652986 kernel: smp: Bringing up secondary CPUs ... Feb 12 21:53:14.652992 kernel: x86: Booting SMP configuration: Feb 12 21:53:14.652999 kernel: .... 
node #0, CPUs: #1 Feb 12 21:53:14.653004 kernel: Disabled fast string operations Feb 12 21:53:14.653010 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 12 21:53:14.653016 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 12 21:53:14.653022 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 21:53:14.653027 kernel: smpboot: Max logical packages: 128 Feb 12 21:53:14.653033 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 12 21:53:14.653039 kernel: devtmpfs: initialized Feb 12 21:53:14.653045 kernel: x86/mm: Memory block size: 128MB Feb 12 21:53:14.653053 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 12 21:53:14.653059 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 21:53:14.653064 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 12 21:53:14.653070 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 21:53:14.653076 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 21:53:14.653082 kernel: audit: initializing netlink subsys (disabled) Feb 12 21:53:14.653088 kernel: audit: type=2000 audit(1707774793.057:1): state=initialized audit_enabled=0 res=1 Feb 12 21:53:14.653094 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 21:53:14.653099 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 21:53:14.653107 kernel: cpuidle: using governor menu Feb 12 21:53:14.653113 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 12 21:53:14.653119 kernel: ACPI: bus type PCI registered Feb 12 21:53:14.653124 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 21:53:14.653130 kernel: dca service started, version 1.12.1 Feb 12 21:53:14.653136 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 12 21:53:14.653142 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Feb 12 21:53:14.653148 kernel: PCI: Using configuration type 1 for base access Feb 12 21:53:14.653154 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 21:53:14.653161 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 21:53:14.653167 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 21:53:14.653172 kernel: ACPI: Added _OSI(Module Device) Feb 12 21:53:14.653178 kernel: ACPI: Added _OSI(Processor Device) Feb 12 21:53:14.653184 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 21:53:14.653190 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 21:53:14.653196 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 21:53:14.653202 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 21:53:14.653207 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 21:53:14.653214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 21:53:14.653220 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 12 21:53:14.653225 kernel: ACPI: Interpreter enabled Feb 12 21:53:14.653231 kernel: ACPI: PM: (supports S0 S1 S5) Feb 12 21:53:14.653237 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 21:53:14.653243 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 21:53:14.653249 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Feb 12 21:53:14.653255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 12 21:53:14.653344 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 21:53:14.653410 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 12 21:53:14.653469 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 12 21:53:14.653477 kernel: PCI host bridge to bus 0000:00 Feb 12 21:53:14.653563 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 21:53:14.653926 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Feb 12 21:53:14.653969 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Feb 12 21:53:14.654012 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Feb 12 21:53:14.654053 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Feb 12 21:53:14.654091 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 12 21:53:14.654132 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 21:53:14.654173 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 12 21:53:14.654230 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 12 21:53:14.654285 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 12 21:53:14.654340 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 12 21:53:14.654392 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 12 21:53:14.654443 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 12 21:53:14.654491 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 12 21:53:14.654537 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 21:53:14.654614 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 21:53:14.654661 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 21:53:14.654709 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 21:53:14.654758 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 12 21:53:14.654804 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Feb 12 21:53:14.654851 kernel: 
pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 12 21:53:14.654901 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 12 21:53:14.654947 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 12 21:53:14.654995 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Feb 12 21:53:14.655046 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 12 21:53:14.655092 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 12 21:53:14.655138 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 12 21:53:14.655182 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 12 21:53:14.655227 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 12 21:53:14.655273 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 21:53:14.655325 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 12 21:53:14.655378 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.655425 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.655475 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.655521 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.658170 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.658241 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.658299 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.658352 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.658407 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.658459 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.658513 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.658582 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.663875 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.663939 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.663992 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.664041 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.664093 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.664145 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.664195 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.664242 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.664290 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.664336 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.664395 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.666605 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.666678 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.666731 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.666788 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.666836 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.666886 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.666936 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.666985 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 
class 0x060400 Feb 12 21:53:14.667032 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667081 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667128 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667178 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667227 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667276 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667322 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667375 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667422 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667473 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667520 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667591 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667640 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667692 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667740 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667790 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667837 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667890 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.667936 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.667987 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668033 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668086 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668132 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668186 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668233 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668282 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668328 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668377 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668423 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668475 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668522 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668581 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 12 21:53:14.668633 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.668694 kernel: pci_bus 0000:01: extended config space not accessible Feb 12 21:53:14.668744 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 12 21:53:14.668793 kernel: pci_bus 0000:02: extended config space not accessible Feb 12 21:53:14.668805 kernel: acpiphp: Slot [32] registered Feb 12 21:53:14.668811 kernel: acpiphp: Slot [33] registered Feb 12 21:53:14.668816 kernel: acpiphp: Slot [34] registered Feb 12 21:53:14.668823 kernel: acpiphp: Slot [35] registered Feb 12 21:53:14.668828 kernel: acpiphp: Slot [36] registered Feb 12 21:53:14.668834 kernel: acpiphp: Slot [37] registered Feb 12 21:53:14.668839 kernel: acpiphp: Slot [38] registered Feb 12 21:53:14.668845 kernel: acpiphp: Slot [39] registered Feb 12 
21:53:14.668852 kernel: acpiphp: Slot [40] registered Feb 12 21:53:14.668858 kernel: acpiphp: Slot [41] registered Feb 12 21:53:14.668864 kernel: acpiphp: Slot [42] registered Feb 12 21:53:14.668869 kernel: acpiphp: Slot [43] registered Feb 12 21:53:14.668875 kernel: acpiphp: Slot [44] registered Feb 12 21:53:14.668881 kernel: acpiphp: Slot [45] registered Feb 12 21:53:14.668887 kernel: acpiphp: Slot [46] registered Feb 12 21:53:14.668893 kernel: acpiphp: Slot [47] registered Feb 12 21:53:14.668898 kernel: acpiphp: Slot [48] registered Feb 12 21:53:14.668904 kernel: acpiphp: Slot [49] registered Feb 12 21:53:14.668910 kernel: acpiphp: Slot [50] registered Feb 12 21:53:14.668916 kernel: acpiphp: Slot [51] registered Feb 12 21:53:14.668922 kernel: acpiphp: Slot [52] registered Feb 12 21:53:14.668928 kernel: acpiphp: Slot [53] registered Feb 12 21:53:14.668933 kernel: acpiphp: Slot [54] registered Feb 12 21:53:14.668939 kernel: acpiphp: Slot [55] registered Feb 12 21:53:14.668944 kernel: acpiphp: Slot [56] registered Feb 12 21:53:14.668950 kernel: acpiphp: Slot [57] registered Feb 12 21:53:14.668956 kernel: acpiphp: Slot [58] registered Feb 12 21:53:14.668962 kernel: acpiphp: Slot [59] registered Feb 12 21:53:14.668968 kernel: acpiphp: Slot [60] registered Feb 12 21:53:14.668974 kernel: acpiphp: Slot [61] registered Feb 12 21:53:14.668980 kernel: acpiphp: Slot [62] registered Feb 12 21:53:14.668985 kernel: acpiphp: Slot [63] registered Feb 12 21:53:14.669032 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 12 21:53:14.669079 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 12 21:53:14.669123 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 12 21:53:14.669168 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 12 21:53:14.669215 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 12 21:53:14.669261 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Feb 12 21:53:14.669306 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Feb 12 21:53:14.669351 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Feb 12 21:53:14.669395 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Feb 12 21:53:14.669440 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 12 21:53:14.669485 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 12 21:53:14.669534 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 12 21:53:14.671405 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 12 21:53:14.671467 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 12 21:53:14.671518 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 12 21:53:14.672831 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 12 21:53:14.672897 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 12 21:53:14.672949 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 12 21:53:14.673004 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 12 21:53:14.673058 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 12 21:53:14.673104 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 12 21:53:14.673153 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 12 21:53:14.673199 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 12 21:53:14.673246 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 12 21:53:14.673293 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 12 21:53:14.673342 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 12 21:53:14.673391 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 12 21:53:14.673437 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 12 21:53:14.673482 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 12 21:53:14.673530 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 12 21:53:14.675098 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 12 21:53:14.675160 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 12 21:53:14.675216 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 12 21:53:14.675557 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 12 21:53:14.675631 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 12 21:53:14.675695 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 12 21:53:14.675743 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 12 21:53:14.675790 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 12 21:53:14.675842 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 12 21:53:14.675889 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 12 21:53:14.675936 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 12 21:53:14.675983 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 12 21:53:14.676029 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 12 21:53:14.676075 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 12 21:53:14.676133 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 12 21:53:14.676182 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 12 21:53:14.676232 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 12 21:53:14.676279 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 12 21:53:14.676325 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 12 21:53:14.676373 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 12 21:53:14.676420 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 12 21:53:14.676466 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 12 21:53:14.676513 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 12 21:53:14.676562 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 12 21:53:14.677543 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 12 21:53:14.677611 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 12 21:53:14.677661 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 12 21:53:14.677708 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 12 21:53:14.677754 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 12 21:53:14.677799 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 12 21:53:14.678634 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 12 21:53:14.678699 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 12 21:53:14.678748 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 12 21:53:14.678795 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 12 21:53:14.678845 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 12 21:53:14.678892 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 12 21:53:14.678937 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 12 21:53:14.678984 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 12 21:53:14.679030 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 12 21:53:14.679079 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 12 21:53:14.679125 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 12 21:53:14.679171 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 12 21:53:14.679216 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 12 21:53:14.679264 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 12 21:53:14.679309 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 12 21:53:14.679354 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 12 21:53:14.679401 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 12 21:53:14.679449 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 12 21:53:14.679494 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 12 21:53:14.679542 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 12 21:53:14.684665 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 12 21:53:14.684737 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 12 21:53:14.684787 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 12 21:53:14.684838 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 12 21:53:14.684885 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 12 21:53:14.684935 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 12 21:53:14.684980 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 12 21:53:14.685028 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 12 21:53:14.685074 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 12 21:53:14.685120 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 12 21:53:14.685171 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 12 21:53:14.685219 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 12 21:53:14.685268 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 12 21:53:14.685313 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 12 21:53:14.685359 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 12 21:53:14.685404 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 12 21:53:14.685448 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 12 21:53:14.685494 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 12 21:53:14.685539 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 12 21:53:14.687108 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 12 21:53:14.687173 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 12 21:53:14.687224 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 12 21:53:14.687271 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 12 21:53:14.687319 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 12 21:53:14.687365 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 12 21:53:14.687410 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 12 21:53:14.687459 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 12 21:53:14.687504 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 12 21:53:14.687552 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 12 21:53:14.687620 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 12 21:53:14.687671 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 12 21:53:14.687716 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 12 21:53:14.687762 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 12 21:53:14.687807 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 12 21:53:14.687856 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 12 21:53:14.687901 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 12 21:53:14.687950 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 12 21:53:14.687997 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 12 21:53:14.688042 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 12 21:53:14.688087 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 12 21:53:14.688133 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 12 21:53:14.688178 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 12 21:53:14.688224 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 12 21:53:14.688271 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 12 21:53:14.688319 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 12 21:53:14.688363 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 12 21:53:14.688409 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 12 21:53:14.688454 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 12 21:53:14.688499 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 12 21:53:14.688546 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 12 21:53:14.688927 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 12 21:53:14.688980 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 12 21:53:14.688991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Feb 12 21:53:14.688998 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Feb 12 21:53:14.689004 kernel: ACPI: PCI: Interrupt link LNKB disabled Feb 12 21:53:14.689010 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 21:53:14.689015 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Feb 12 21:53:14.689021 kernel: iommu: Default domain type: Translated Feb 12 21:53:14.689027 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 21:53:14.689075 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Feb 12 21:53:14.689122 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 21:53:14.689175 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Feb 12 21:53:14.689183 kernel: vgaarb: loaded Feb 12 21:53:14.689189 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 21:53:14.689195 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 21:53:14.689201 kernel: PTP clock support registered Feb 12 21:53:14.689207 kernel: PCI: Using ACPI for IRQ routing Feb 12 21:53:14.689212 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 21:53:14.689218 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Feb 12 21:53:14.689224 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Feb 12 21:53:14.689232 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Feb 12 21:53:14.689237 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Feb 12 21:53:14.689243 kernel: clocksource: Switched to clocksource tsc-early Feb 12 21:53:14.689249 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 21:53:14.689255 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 21:53:14.689260 kernel: pnp: PnP ACPI init Feb 12 21:53:14.689309 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Feb 12 21:53:14.689351 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Feb 12 21:53:14.689394 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Feb 12 21:53:14.689437 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Feb 12 21:53:14.689483 kernel: pnp 00:06: [dma 2] Feb 12 21:53:14.689527 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Feb 12 21:53:14.689576 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Feb 12 21:53:14.689624 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Feb 12 21:53:14.689634 kernel: pnp: PnP ACPI: found 8 devices Feb 12 21:53:14.689648 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 21:53:14.689658 kernel: NET: Registered PF_INET protocol family Feb 12 21:53:14.689665 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 21:53:14.689672 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 12 21:53:14.689678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 21:53:14.689684 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 21:53:14.689690 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 12 21:53:14.689695 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 12 21:53:14.689703 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 21:53:14.689708 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 21:53:14.689714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 
21:53:14.689720 kernel: NET: Registered PF_XDP protocol family Feb 12 21:53:14.689772 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 12 21:53:14.689821 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 12 21:53:14.689870 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 12 21:53:14.689921 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 12 21:53:14.689970 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 12 21:53:14.690018 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Feb 12 21:53:14.690066 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Feb 12 21:53:14.690113 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Feb 12 21:53:14.690161 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Feb 12 21:53:14.690211 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Feb 12 21:53:14.690259 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Feb 12 21:53:14.690306 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Feb 12 21:53:14.690352 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Feb 12 21:53:14.690400 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Feb 12 21:53:14.690460 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Feb 12 21:53:14.690510 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Feb 12 21:53:14.690556 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Feb 12 21:53:14.690615 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Feb 12 21:53:14.690663 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Feb 12 21:53:14.690710 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Feb 12 21:53:14.690756 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Feb 12 21:53:14.690804 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Feb 12 21:53:14.690851 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Feb 12 21:53:14.690897 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Feb 12 21:53:14.690941 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Feb 12 21:53:14.690987 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691033 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691079 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691127 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691173 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691220 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691265 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691310 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 12 
21:53:14.691356 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691402 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691447 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691505 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691557 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691620 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691666 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691711 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691757 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.691802 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.691983 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692033 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692078 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692124 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692169 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692215 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692261 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692305 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692351 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692399 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692444 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692490 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692536 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692622 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692675 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692720 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692766 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692813 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692857 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692901 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.692945 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.692990 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.693035 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.693079 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.693179 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.693253 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.693298 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.693343 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.693388 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.693432 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Feb 12 21:53:14.693943 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694004 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.694056 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694379 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.694441 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694493 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.694541 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694689 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.694737 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694783 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.694829 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694873 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.694919 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.694965 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695013 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695058 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695104 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695149 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695195 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695240 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695285 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695330 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695376 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695421 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695469 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695514 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695561 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695614 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695667 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695714 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695759 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695806 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.695851 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.695900 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.696252 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 12 21:53:14.696317 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 12 21:53:14.696681 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 12 21:53:14.696744 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 12 21:53:14.696794 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 12 21:53:14.697123 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 12 21:53:14.697177 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 12 21:53:14.697231 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 12 21:53:14.697279 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 12 21:53:14.697506 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 12 21:53:14.697555 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 12 21:53:14.697643 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 12 21:53:14.697969 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 12 21:53:14.698020 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 12 21:53:14.698067 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 12 21:53:14.698144 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 12 21:53:14.698199 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 12 21:53:14.698245 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 12 21:53:14.698291 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 12 21:53:14.698336 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 12 21:53:14.698381 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 12 21:53:14.698426 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 12 21:53:14.698471 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 12 21:53:14.698520 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 12 21:53:14.698572 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 12 21:53:14.698620 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 12 21:53:14.698665 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 12 21:53:14.698726 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 12 21:53:14.698773 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 12 21:53:14.698819 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 12 21:53:14.698864 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 12 21:53:14.699201 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 12 21:53:14.699257 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 12 21:53:14.699305 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 12 21:53:14.699649 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 12 21:53:14.699718 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 12 21:53:14.699770 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 12 21:53:14.700112 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 12 21:53:14.700164 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 12 21:53:14.700212 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 12 21:53:14.700260 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 12 21:53:14.700310 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 12 21:53:14.700356 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 12 21:53:14.700400 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 12 21:53:14.700446 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 12 21:53:14.700491 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 12 21:53:14.700536 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Feb 12 21:53:14.700632 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 12 21:53:14.700679 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 12 21:53:14.700723 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 12 21:53:14.700771 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 12 21:53:14.700817 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 12 21:53:14.700861 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 12 21:53:14.700906 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 12 21:53:14.700952 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 12 21:53:14.700998 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 12 21:53:14.701042 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 12 21:53:14.701088 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 12 21:53:14.701133 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 12 21:53:14.701177 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 12 21:53:14.701225 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 12 21:53:14.701270 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 12 21:53:14.701314 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 12 21:53:14.701361 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 12 21:53:14.701405 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 12 21:53:14.701450 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 12 21:53:14.701494 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 12 21:53:14.701540 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 12 21:53:14.701626 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 12 21:53:14.701676 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 12 21:53:14.701722 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 12 21:53:14.701768 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 12 21:53:14.702004 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 12 21:53:14.702057 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 12 21:53:14.702104 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 12 21:53:14.702152 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 12 21:53:14.702481 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 12 21:53:14.702535 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 12 21:53:14.702660 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 12 21:53:14.702811 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 12 21:53:14.702864 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 12 21:53:14.703220 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 12 21:53:14.703287 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 12 21:53:14.703338 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 12 21:53:14.703391 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 12 21:53:14.703452 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 12 21:53:14.703505 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 
12 21:53:14.703553 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 12 21:53:14.703620 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 12 21:53:14.703667 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 12 21:53:14.703714 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 12 21:53:14.703759 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 12 21:53:14.703804 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 12 21:53:14.703849 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 12 21:53:14.703897 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 12 21:53:14.703944 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 12 21:53:14.703988 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 12 21:53:14.704035 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 12 21:53:14.704087 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 12 21:53:14.704133 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 12 21:53:14.704178 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 12 21:53:14.704225 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 12 21:53:14.704270 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 12 21:53:14.704317 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 12 21:53:14.704373 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 12 21:53:14.704420 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 12 21:53:14.704465 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 12 21:53:14.704526 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 12 21:53:14.704632 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 12 21:53:14.704683 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 12 21:53:14.704731 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 12 21:53:14.704800 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 12 21:53:14.704871 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 12 21:53:14.705073 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 12 21:53:14.705123 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 12 21:53:14.705170 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 12 21:53:14.705552 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 12 21:53:14.705634 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Feb 12 21:53:14.705684 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 12 21:53:14.705744 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 12 21:53:14.705932 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 12 21:53:14.705976 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 12 21:53:14.706016 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Feb 12 21:53:14.706078 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Feb 12 21:53:14.706421 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 12 21:53:14.706466 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 12 21:53:14.706508 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 12 
21:53:14.706712 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 12 21:53:14.706763 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Feb 12 21:53:14.706806 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 12 21:53:14.706847 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 12 21:53:14.707163 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 12 21:53:14.707208 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 12 21:53:14.707250 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Feb 12 21:53:14.707292 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Feb 12 21:53:14.707342 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 12 21:53:14.707385 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 12 21:53:14.707426 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 12 21:53:14.707474 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 12 21:53:14.707517 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 12 21:53:14.707558 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 12 21:53:14.707627 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 12 21:53:14.707670 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 12 21:53:14.707712 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 12 21:53:14.707757 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 12 21:53:14.707802 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 12 21:53:14.707847 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 12 21:53:14.707890 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 12 21:53:14.707939 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 12 21:53:14.707991 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 12 21:53:14.708040 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 12 21:53:14.708112 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Feb 12 21:53:14.708187 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 12 21:53:14.708231 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 12 21:53:14.708278 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 12 21:53:14.708321 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 12 21:53:14.708363 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 12 21:53:14.708413 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Feb 12 21:53:14.708466 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 12 21:53:14.708509 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 12 21:53:14.708555 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 12 21:53:14.708669 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 12 21:53:14.708723 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 12 21:53:14.708771 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 12 21:53:14.708817 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 12 21:53:14.708864 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 12 21:53:14.708914 kernel: pci_bus 0000:0f: 
resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 12 21:53:14.708977 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 12 21:53:14.709022 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 12 21:53:14.709069 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 12 21:53:14.709123 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Feb 12 21:53:14.709171 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 12 21:53:14.709219 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 12 21:53:14.709274 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 12 21:53:14.709319 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 12 21:53:14.709361 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 12 21:53:14.709409 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 12 21:53:14.709453 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 12 21:53:14.709494 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 12 21:53:14.709541 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Feb 12 21:53:14.709596 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 12 21:53:14.709653 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 12 21:53:14.709711 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 12 21:53:14.709758 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 12 21:53:14.709805 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 12 21:53:14.709847 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 12 21:53:14.709894 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 12 21:53:14.709937 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 12 21:53:14.709983 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 12 21:53:14.710028 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 12 21:53:14.710078 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 12 21:53:14.710121 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 12 21:53:14.710167 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 12 21:53:14.710210 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 12 21:53:14.710252 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 12 21:53:14.710304 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 12 21:53:14.710347 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Feb 12 21:53:14.710389 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 12 21:53:14.710435 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 12 21:53:14.710477 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 12 21:53:14.710524 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 12 21:53:14.710575 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 12 21:53:14.710623 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 12 21:53:14.710666 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 12 21:53:14.710711 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 12 21:53:14.710755 kernel: pci_bus 0000:20: resource 2 [mem 
0xe6500000-0xe65fffff 64bit pref] Feb 12 21:53:14.710802 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 12 21:53:14.710845 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 12 21:53:14.710894 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 12 21:53:14.710937 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 12 21:53:14.710990 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 21:53:14.711000 kernel: PCI: CLS 32 bytes, default 64 Feb 12 21:53:14.711007 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 12 21:53:14.711014 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 12 21:53:14.711020 kernel: clocksource: Switched to clocksource tsc Feb 12 21:53:14.711027 kernel: Initialise system trusted keyrings Feb 12 21:53:14.711034 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 12 21:53:14.711040 kernel: Key type asymmetric registered Feb 12 21:53:14.711046 kernel: Asymmetric key parser 'x509' registered Feb 12 21:53:14.711052 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 21:53:14.711058 kernel: io scheduler mq-deadline registered Feb 12 21:53:14.711064 kernel: io scheduler kyber registered Feb 12 21:53:14.711070 kernel: io scheduler bfq registered Feb 12 21:53:14.711120 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 12 21:53:14.711170 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.711247 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Feb 12 21:53:14.711333 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.711393 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Feb 12 21:53:14.711979 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712041 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Feb 12 21:53:14.712096 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712175 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Feb 12 21:53:14.712437 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712491 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Feb 12 21:53:14.712541 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712643 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Feb 12 21:53:14.712695 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712742 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Feb 12 21:53:14.712790 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712837 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Feb 12 
21:53:14.712883 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.712930 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Feb 12 21:53:14.713001 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.713948 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Feb 12 21:53:14.714001 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.714341 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Feb 12 21:53:14.714405 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.714457 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Feb 12 21:53:14.714509 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.714957 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Feb 12 21:53:14.715009 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.715199 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Feb 12 21:53:14.715251 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.715300 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Feb 12 21:53:14.715530 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.715601 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Feb 12 21:53:14.715652 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.715723 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Feb 12 21:53:14.716022 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.716256 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Feb 12 21:53:14.716314 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.716364 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Feb 12 21:53:14.716418 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.716467 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Feb 12 21:53:14.716514 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.716562 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Feb 12 21:53:14.716888 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717018 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Feb 12 
21:53:14.717079 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717128 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 12 21:53:14.717175 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717415 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 12 21:53:14.717466 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717538 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 12 21:53:14.717624 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717683 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 12 21:53:14.717739 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717795 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 12 21:53:14.717847 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717900 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 12 21:53:14.717949 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.717996 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 12 21:53:14.718044 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.718098 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 12 21:53:14.718154 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.718221 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 12 21:53:14.718281 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:53:14.718292 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 21:53:14.718304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 21:53:14.718312 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 21:53:14.718318 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 12 21:53:14.718325 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 21:53:14.718331 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 21:53:14.718384 kernel: rtc_cmos 00:01: registered as rtc0 Feb 12 21:53:14.718438 kernel: rtc_cmos 00:01: setting system clock to 2024-02-12T21:53:14 UTC (1707774794) Feb 12 21:53:14.718482 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 12 21:53:14.718490 kernel: fail to initialize ptp_kvm Feb 12 21:53:14.718499 kernel: intel_pstate: CPU model not supported Feb 12 21:53:14.718506 kernel: NET: Registered PF_INET6 protocol family Feb 12 21:53:14.718512 kernel: Segment Routing with IPv6 Feb 12 21:53:14.718518 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 
21:53:14.718524 kernel: NET: Registered PF_PACKET protocol family Feb 12 21:53:14.718535 kernel: Key type dns_resolver registered Feb 12 21:53:14.718545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 21:53:14.718552 kernel: IPI shorthand broadcast: enabled Feb 12 21:53:14.718560 kernel: sched_clock: Marking stable (851367174, 224742569)->(1144495101, -68385358) Feb 12 21:53:14.718896 kernel: registered taskstats version 1 Feb 12 21:53:14.718905 kernel: Loading compiled-in X.509 certificates Feb 12 21:53:14.718911 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 21:53:14.718917 kernel: Key type .fscrypt registered Feb 12 21:53:14.718923 kernel: Key type fscrypt-provisioning registered Feb 12 21:53:14.718929 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 21:53:14.718935 kernel: ima: Allocated hash algorithm: sha1 Feb 12 21:53:14.718942 kernel: ima: No architecture policies found Feb 12 21:53:14.718950 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 21:53:14.718956 kernel: Write protecting the kernel read-only data: 28672k Feb 12 21:53:14.718963 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 21:53:14.718969 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 21:53:14.718975 kernel: Run /init as init process Feb 12 21:53:14.718981 kernel: with arguments: Feb 12 21:53:14.719177 kernel: /init Feb 12 21:53:14.719185 kernel: with environment: Feb 12 21:53:14.719191 kernel: HOME=/ Feb 12 21:53:14.719197 kernel: TERM=linux Feb 12 21:53:14.719205 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 21:53:14.719213 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 21:53:14.719221 systemd[1]: Detected virtualization vmware. Feb 12 21:53:14.719228 systemd[1]: Detected architecture x86-64. Feb 12 21:53:14.719234 systemd[1]: Running in initrd. Feb 12 21:53:14.719240 systemd[1]: No hostname configured, using default hostname. Feb 12 21:53:14.719246 systemd[1]: Hostname set to . Feb 12 21:53:14.719255 systemd[1]: Initializing machine ID from random generator. Feb 12 21:53:14.719261 systemd[1]: Queued start job for default target initrd.target. Feb 12 21:53:14.719267 systemd[1]: Started systemd-ask-password-console.path. Feb 12 21:53:14.719273 systemd[1]: Reached target cryptsetup.target. Feb 12 21:53:14.719279 systemd[1]: Reached target paths.target. Feb 12 21:53:14.719286 systemd[1]: Reached target slices.target. Feb 12 21:53:14.719292 systemd[1]: Reached target swap.target. Feb 12 21:53:14.719299 systemd[1]: Reached target timers.target. Feb 12 21:53:14.719306 systemd[1]: Listening on iscsid.socket. Feb 12 21:53:14.719312 systemd[1]: Listening on iscsiuio.socket. Feb 12 21:53:14.719319 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 21:53:14.719325 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 21:53:14.719331 systemd[1]: Listening on systemd-journald.socket. Feb 12 21:53:14.719338 systemd[1]: Listening on systemd-networkd.socket. Feb 12 21:53:14.719344 systemd[1]: Listening on systemd-udevd-control.socket. 
Feb 12 21:53:14.719350 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 21:53:14.719358 systemd[1]: Reached target sockets.target. Feb 12 21:53:14.719364 systemd[1]: Starting kmod-static-nodes.service... Feb 12 21:53:14.719387 systemd[1]: Finished network-cleanup.service. Feb 12 21:53:14.719397 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 21:53:14.719405 systemd[1]: Starting systemd-journald.service... Feb 12 21:53:14.719412 systemd[1]: Starting systemd-modules-load.service... Feb 12 21:53:14.719422 systemd[1]: Starting systemd-resolved.service... Feb 12 21:53:14.719432 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 21:53:14.719441 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:53:14.719727 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 21:53:14.719734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:53:14.719741 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 21:53:14.719747 kernel: audit: type=1130 audit(1707774794.653:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.719754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:53:14.719760 kernel: audit: type=1130 audit(1707774794.658:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.719767 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 21:53:14.719773 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 21:53:14.719781 systemd[1]: Starting dracut-cmdline.service... Feb 12 21:53:14.719790 kernel: audit: type=1130 audit(1707774794.677:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.719798 systemd[1]: Started systemd-resolved.service. Feb 12 21:53:14.719804 systemd[1]: Reached target nss-lookup.target. Feb 12 21:53:14.719811 kernel: audit: type=1130 audit(1707774794.686:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.719818 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 21:53:14.719825 kernel: Bridge firewalling registered Feb 12 21:53:14.719920 kernel: SCSI subsystem initialized Feb 12 21:53:14.719933 systemd-journald[216]: Journal started Feb 12 21:53:14.719972 systemd-journald[216]: Runtime Journal (/run/log/journal/d9b596893c2a499dafb1df5d718017ab) is 4.8M, max 38.8M, 34.0M free. Feb 12 21:53:14.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:14.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.657554 systemd-modules-load[217]: Inserted module 'overlay' Feb 12 21:53:14.678100 systemd-resolved[218]: Positive Trust Anchors: Feb 12 21:53:14.727667 systemd[1]: Started systemd-journald.service. Feb 12 21:53:14.727684 kernel: audit: type=1130 audit(1707774794.719:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.727715 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 21:53:14.727727 kernel: device-mapper: uevent: version 1.0.3 Feb 12 21:53:14.727735 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 21:53:14.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.678107 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:53:14.678127 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:53:14.731141 kernel: audit: type=1130 audit(1707774794.726:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.686563 systemd-resolved[218]: Defaulting to hostname 'linux'. Feb 12 21:53:14.698896 systemd-modules-load[217]: Inserted module 'br_netfilter' Feb 12 21:53:14.727276 systemd-modules-load[217]: Inserted module 'dm_multipath' Feb 12 21:53:14.727710 systemd[1]: Finished systemd-modules-load.service. Feb 12 21:53:14.732532 dracut-cmdline[233]: dracut-dracut-053 Feb 12 21:53:14.732532 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 12 21:53:14.732532 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:53:14.728252 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:53:14.735136 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 21:53:14.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.738635 kernel: audit: type=1130 audit(1707774794.733:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.749584 kernel: Loading iSCSI transport class v2.0-870. Feb 12 21:53:14.757589 kernel: iscsi: registered transport (tcp) Feb 12 21:53:14.771692 kernel: iscsi: registered transport (qla4xxx) Feb 12 21:53:14.771740 kernel: QLogic iSCSI HBA Driver Feb 12 21:53:14.788354 systemd[1]: Finished dracut-cmdline.service. Feb 12 21:53:14.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.788976 systemd[1]: Starting dracut-pre-udev.service... Feb 12 21:53:14.791785 kernel: audit: type=1130 audit(1707774794.786:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:14.825623 kernel: raid6: avx2x4 gen() 47563 MB/s Feb 12 21:53:14.842600 kernel: raid6: avx2x4 xor() 18783 MB/s Feb 12 21:53:14.859584 kernel: raid6: avx2x2 gen() 52170 MB/s Feb 12 21:53:14.876588 kernel: raid6: avx2x2 xor() 31483 MB/s Feb 12 21:53:14.893595 kernel: raid6: avx2x1 gen() 44556 MB/s Feb 12 21:53:14.910588 kernel: raid6: avx2x1 xor() 27602 MB/s Feb 12 21:53:14.927588 kernel: raid6: sse2x4 gen() 20890 MB/s Feb 12 21:53:14.944590 kernel: raid6: sse2x4 xor() 11836 MB/s Feb 12 21:53:14.961588 kernel: raid6: sse2x2 gen() 21268 MB/s Feb 12 21:53:14.978588 kernel: raid6: sse2x2 xor() 13345 MB/s Feb 12 21:53:14.995584 kernel: raid6: sse2x1 gen() 18099 MB/s Feb 12 21:53:15.012810 kernel: raid6: sse2x1 xor() 8881 MB/s Feb 12 21:53:15.012845 kernel: raid6: using algorithm avx2x2 gen() 52170 MB/s Feb 12 21:53:15.012852 kernel: raid6: .... xor() 31483 MB/s, rmw enabled Feb 12 21:53:15.013985 kernel: raid6: using avx2x2 recovery algorithm Feb 12 21:53:15.022585 kernel: xor: automatically using best checksumming function avx Feb 12 21:53:15.081589 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 21:53:15.086255 systemd[1]: Finished dracut-pre-udev.service. Feb 12 21:53:15.086852 systemd[1]: Starting systemd-udevd.service... Feb 12 21:53:15.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:15.085000 audit: BPF prog-id=7 op=LOAD Feb 12 21:53:15.085000 audit: BPF prog-id=8 op=LOAD Feb 12 21:53:15.091582 kernel: audit: type=1130 audit(1707774795.084:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:15.096959 systemd-udevd[415]: Using default interface naming scheme 'v252'. Feb 12 21:53:15.099532 systemd[1]: Started systemd-udevd.service. Feb 12 21:53:15.100013 systemd[1]: Starting dracut-pre-trigger.service... 
Feb 12 21:53:15.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:15.107870 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 12 21:53:15.123750 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 21:53:15.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:15.124284 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:53:15.185399 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:53:15.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:15.241580 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 12 21:53:15.243409 kernel: vmw_pvscsi: using 64bit dma Feb 12 21:53:15.243425 kernel: vmw_pvscsi: max_id: 16 Feb 12 21:53:15.243436 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 12 21:53:15.250582 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Feb 12 21:53:15.252578 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 12 21:53:15.255810 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 12 21:53:15.255830 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 12 21:53:15.255839 kernel: vmw_pvscsi: using MSI-X Feb 12 21:53:15.257112 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 12 21:53:15.258656 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Feb 12 21:53:15.258740 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 12 21:53:15.260582 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 12 21:53:15.274583 kernel: libata version 3.00 loaded. Feb 12 21:53:15.276580 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 12 21:53:15.277581 kernel: scsi host1: ata_piix Feb 12 21:53:15.279587 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 21:53:15.279603 kernel: scsi host2: ata_piix Feb 12 21:53:15.279678 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 12 21:53:15.281058 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 12 21:53:15.285581 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 12 21:53:15.451630 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 12 21:53:15.457580 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 12 21:53:15.463596 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 12 21:53:15.463644 kernel: AES CTR mode by8 optimization enabled Feb 12 21:53:15.472561 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 12 21:53:15.472713 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 12 21:53:15.472816 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 12 21:53:15.472912 kernel: sd 0:0:0:0: [sda] Cache data unavailable Feb 12 21:53:15.473006 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Feb 12 21:53:15.550880 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 21:53:15.550920 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 12 21:53:15.574024 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 12 21:53:15.574135 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 21:53:15.579048 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 21:53:15.583432 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 21:53:15.583744 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 21:53:15.584397 systemd[1]: Starting disk-uuid.service... Feb 12 21:53:15.586582 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (475) Feb 12 21:53:15.597583 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 12 21:53:15.597594 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 21:53:15.602474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 21:53:16.670543 disk-uuid[547]: The operation has completed successfully. Feb 12 21:53:16.670806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 21:53:16.770306 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 21:53:16.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:16.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:16.770369 systemd[1]: Finished disk-uuid.service. Feb 12 21:53:16.771005 systemd[1]: Starting verity-setup.service... Feb 12 21:53:16.782012 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 21:53:16.826294 systemd[1]: Found device dev-mapper-usr.device. Feb 12 21:53:16.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:16.827361 systemd[1]: Mounting sysusr-usr.mount... Feb 12 21:53:16.827559 systemd[1]: Finished verity-setup.service. Feb 12 21:53:16.879581 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 21:53:16.879728 systemd[1]: Mounted sysusr-usr.mount. Feb 12 21:53:16.880348 systemd[1]: Starting afterburn-network-kargs.service... Feb 12 21:53:16.880828 systemd[1]: Starting ignition-setup.service... 
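The verity-setup and sysusr-usr steps above map the read-only /usr image through dm-verity using the verity.usr= and verity.usrhash= parameters from the kernel command line. Once the system is up, the resulting mapping can be inspected with the standard device-mapper tools; a minimal sketch, assuming the mapping keeps the name "usr" seen in dev-mapper-usr.device:

    veritysetup status usr    # summarizes the dm-verity target behind /dev/mapper/usr
    dmsetup table usr         # prints the verity table, including the root hash supplied as verity.usrhash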
Feb 12 21:53:16.982262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:53:16.982299 kernel: BTRFS info (device sda6): using free space tree Feb 12 21:53:16.982308 kernel: BTRFS info (device sda6): has skinny extents Feb 12 21:53:17.032583 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 12 21:53:17.055872 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 21:53:17.240211 systemd[1]: Finished ignition-setup.service. Feb 12 21:53:17.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.240887 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 21:53:17.320689 systemd[1]: Finished afterburn-network-kargs.service. Feb 12 21:53:17.321273 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 21:53:17.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.362000 audit: BPF prog-id=9 op=LOAD Feb 12 21:53:17.363967 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 21:53:17.364854 systemd[1]: Starting systemd-networkd.service... Feb 12 21:53:17.378386 systemd-networkd[730]: lo: Link UP Feb 12 21:53:17.378393 systemd-networkd[730]: lo: Gained carrier Feb 12 21:53:17.378671 systemd-networkd[730]: Enumeration completed Feb 12 21:53:17.378860 systemd-networkd[730]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Feb 12 21:53:17.383723 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 12 21:53:17.383841 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 12 21:53:17.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.379313 systemd[1]: Started systemd-networkd.service. Feb 12 21:53:17.379459 systemd[1]: Reached target network.target. Feb 12 21:53:17.380025 systemd[1]: Starting iscsiuio.service... Feb 12 21:53:17.382802 systemd-networkd[730]: ens192: Link UP Feb 12 21:53:17.382804 systemd-networkd[730]: ens192: Gained carrier Feb 12 21:53:17.385561 systemd[1]: Started iscsiuio.service. Feb 12 21:53:17.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.386612 systemd[1]: Starting iscsid.service... Feb 12 21:53:17.389079 iscsid[735]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:53:17.389079 iscsid[735]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Feb 12 21:53:17.389079 iscsid[735]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 21:53:17.389079 iscsid[735]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 21:53:17.389079 iscsid[735]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:53:17.390053 iscsid[735]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 21:53:17.390621 systemd[1]: Started iscsid.service. Feb 12 21:53:17.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.391577 systemd[1]: Starting dracut-initqueue.service... Feb 12 21:53:17.398024 systemd[1]: Finished dracut-initqueue.service. Feb 12 21:53:17.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.398345 systemd[1]: Reached target remote-fs-pre.target. Feb 12 21:53:17.398593 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:53:17.398800 systemd[1]: Reached target remote-fs.target. Feb 12 21:53:17.399443 systemd[1]: Starting dracut-pre-mount.service... Feb 12 21:53:17.404473 systemd[1]: Finished dracut-pre-mount.service. Feb 12 21:53:17.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.440382 ignition[602]: Ignition 2.14.0 Feb 12 21:53:17.440390 ignition[602]: Stage: fetch-offline Feb 12 21:53:17.440426 ignition[602]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:53:17.440441 ignition[602]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:53:17.443835 ignition[602]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:53:17.443908 ignition[602]: parsed url from cmdline: "" Feb 12 21:53:17.443909 ignition[602]: no config URL provided Feb 12 21:53:17.443912 ignition[602]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 21:53:17.443917 ignition[602]: no config at "/usr/lib/ignition/user.ign" Feb 12 21:53:17.448614 ignition[602]: config successfully fetched Feb 12 21:53:17.448665 ignition[602]: parsing config with SHA512: 5a1cb433219c6cbc3fe0d94e14802fcf0b554693bf41ec39e2695b1959745864dccdef83291185ae9461434599e9801ab71260e6eb872b56f2d39eede9fa3328 Feb 12 21:53:17.466862 unknown[602]: fetched base config from "system" Feb 12 21:53:17.467065 unknown[602]: fetched user config from "vmware" Feb 12 21:53:17.467546 ignition[602]: fetch-offline: fetch-offline passed Feb 12 21:53:17.467715 ignition[602]: Ignition finished successfully Feb 12 21:53:17.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.468400 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 21:53:17.468547 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 21:53:17.469017 systemd[1]: Starting ignition-kargs.service... 
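The iscsid warnings above appear to be benign for this boot (the root disk is a local PVSCSI device, not iSCSI), but the file they refer to is a one-line configuration whose full upstream format is InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. A minimal sketch, reusing the example IQN from the message itself:

    echo 'InitiatorName=iqn.2001-04.com.redhat:fc6' > /etc/iscsi/initiatorname.iscsi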
Feb 12 21:53:17.474252 ignition[750]: Ignition 2.14.0 Feb 12 21:53:17.474504 ignition[750]: Stage: kargs Feb 12 21:53:17.474690 ignition[750]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:53:17.474843 ignition[750]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:53:17.476262 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:53:17.478132 ignition[750]: kargs: kargs passed Feb 12 21:53:17.478308 ignition[750]: Ignition finished successfully Feb 12 21:53:17.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.479245 systemd[1]: Finished ignition-kargs.service. Feb 12 21:53:17.479917 systemd[1]: Starting ignition-disks.service... Feb 12 21:53:17.484616 ignition[756]: Ignition 2.14.0 Feb 12 21:53:17.484890 ignition[756]: Stage: disks Feb 12 21:53:17.485069 ignition[756]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:53:17.485220 ignition[756]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:53:17.486529 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:53:17.488143 ignition[756]: disks: disks passed Feb 12 21:53:17.488287 ignition[756]: Ignition finished successfully Feb 12 21:53:17.488886 systemd[1]: Finished ignition-disks.service. Feb 12 21:53:17.489055 systemd[1]: Reached target initrd-root-device.target. Feb 12 21:53:17.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.489164 systemd[1]: Reached target local-fs-pre.target. Feb 12 21:53:17.489325 systemd[1]: Reached target local-fs.target. Feb 12 21:53:17.489484 systemd[1]: Reached target sysinit.target. Feb 12 21:53:17.489652 systemd[1]: Reached target basic.target. Feb 12 21:53:17.490286 systemd[1]: Starting systemd-fsck-root.service... Feb 12 21:53:17.501020 systemd-fsck[764]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 21:53:17.502452 systemd[1]: Finished systemd-fsck-root.service. Feb 12 21:53:17.503037 systemd[1]: Mounting sysroot.mount... Feb 12 21:53:17.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.542580 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 21:53:17.542594 systemd[1]: Mounted sysroot.mount. Feb 12 21:53:17.542748 systemd[1]: Reached target initrd-root-fs.target. Feb 12 21:53:17.554772 systemd[1]: Mounting sysroot-usr.mount... Feb 12 21:53:17.555125 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 21:53:17.555155 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 21:53:17.555171 systemd[1]: Reached target ignition-diskful.target. Feb 12 21:53:17.557293 systemd[1]: Mounted sysroot-usr.mount. 
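The systemd-fsck-root run and the sysroot mount above are the automated equivalent of checking and mounting the ROOT filesystem by hand from the initramfs; roughly, using the by-label path the earlier device units point at:

    fsck.ext4 -n /dev/disk/by-label/ROOT    # read-only check; the log reports the filesystem as clean
    mount /dev/disk/by-label/ROOT /sysroot  # ROOT is the ext4 filesystem on sda9 in this layout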
Feb 12 21:53:17.557775 systemd[1]: Starting initrd-setup-root.service... Feb 12 21:53:17.567275 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 21:53:17.590634 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Feb 12 21:53:17.597753 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 21:53:17.605375 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 21:53:17.841971 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 21:53:17.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.842313 systemd[1]: Finished initrd-setup-root.service. Feb 12 21:53:17.842780 systemd[1]: Starting ignition-mount.service... Feb 12 21:53:17.843191 systemd[1]: Starting sysroot-boot.service... Feb 12 21:53:17.846817 bash[816]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 21:53:17.851879 ignition[817]: INFO : Ignition 2.14.0 Feb 12 21:53:17.852096 ignition[817]: INFO : Stage: mount Feb 12 21:53:17.852265 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:53:17.852419 ignition[817]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:53:17.853886 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:53:17.855362 ignition[817]: INFO : mount: mount passed Feb 12 21:53:17.855473 ignition[817]: INFO : Ignition finished successfully Feb 12 21:53:17.856003 systemd[1]: Finished ignition-mount.service. Feb 12 21:53:17.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:17.946592 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (810) Feb 12 21:53:17.964160 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:53:17.964205 kernel: BTRFS info (device sda6): using free space tree Feb 12 21:53:17.964213 kernel: BTRFS info (device sda6): has skinny extents Feb 12 21:53:18.025030 systemd[1]: Finished sysroot-boot.service. Feb 12 21:53:18.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:18.054587 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 12 21:53:18.070815 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 21:53:18.071399 systemd[1]: Starting ignition-files.service... 
Feb 12 21:53:18.080828 ignition[845]: INFO : Ignition 2.14.0 Feb 12 21:53:18.080828 ignition[845]: INFO : Stage: files Feb 12 21:53:18.081177 ignition[845]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:53:18.081177 ignition[845]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:53:18.082296 ignition[845]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:53:18.090201 ignition[845]: DEBUG : files: compiled without relabeling support, skipping Feb 12 21:53:18.093506 ignition[845]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 21:53:18.093506 ignition[845]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 21:53:18.120000 ignition[845]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 21:53:18.120284 ignition[845]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 21:53:18.129452 unknown[845]: wrote ssh authorized keys file for user: core Feb 12 21:53:18.129862 ignition[845]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 21:53:18.131657 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 21:53:18.131831 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 21:53:18.131831 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 21:53:18.131831 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 21:53:18.455749 systemd-networkd[730]: ens192: Gained IPv6LL Feb 12 21:53:18.508336 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 21:53:18.612473 ignition[845]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 21:53:18.612789 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 21:53:18.612789 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 21:53:18.612789 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 21:53:19.019234 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 21:53:19.081977 ignition[845]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 21:53:19.082284 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 21:53:19.082617 ignition[845]: 
INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 21:53:19.082784 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 21:53:19.154924 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 21:53:19.396405 ignition[845]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 21:53:19.396726 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 21:53:19.396726 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 21:53:19.396726 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 21:53:19.442577 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 21:53:20.012677 ignition[845]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 21:53:20.013132 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 21:53:20.013394 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 21:53:20.013758 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 21:53:20.013991 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 21:53:20.014296 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 21:53:20.014778 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 21:53:20.015066 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 21:53:20.015491 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Feb 12 21:53:20.016008 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 21:53:20.020963 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3551273052" Feb 12 21:53:20.021259 ignition[845]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3551273052": device or resource busy Feb 12 21:53:20.021482 ignition[845]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3551273052", trying btrfs: device or resource busy Feb 12 21:53:20.021708 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): 
op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3551273052" Feb 12 21:53:20.023251 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3551273052" Feb 12 21:53:20.023585 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (850) Feb 12 21:53:20.023897 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3551273052" Feb 12 21:53:20.024125 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3551273052" Feb 12 21:53:20.024323 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Feb 12 21:53:20.024686 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Feb 12 21:53:20.025026 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Feb 12 21:53:20.026921 ignition[845]: INFO : files: op(10): [started] processing unit "vmtoolsd.service" Feb 12 21:53:20.027038 systemd[1]: mnt-oem3551273052.mount: Deactivated successfully. Feb 12 21:53:20.027544 ignition[845]: INFO : files: op(10): [finished] processing unit "vmtoolsd.service" Feb 12 21:53:20.027711 ignition[845]: INFO : files: op(11): [started] processing unit "containerd.service" Feb 12 21:53:20.027885 ignition[845]: INFO : files: op(11): op(12): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 21:53:20.028231 ignition[845]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 21:53:20.028446 ignition[845]: INFO : files: op(11): [finished] processing unit "containerd.service" Feb 12 21:53:20.028599 ignition[845]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 12 21:53:20.028768 ignition[845]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 21:53:20.029021 ignition[845]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 21:53:20.029221 ignition[845]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 12 21:53:20.029370 ignition[845]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 12 21:53:20.029536 ignition[845]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 21:53:20.029843 ignition[845]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 21:53:20.030041 ignition[845]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Feb 12 21:53:20.030188 ignition[845]: INFO : files: op(17): [started] processing unit "coreos-metadata.service" Feb 12 21:53:20.030352 ignition[845]: INFO : files: op(17): op(18): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" 
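Each GET/verification pair in the files stage is a plain download followed by a SHA-512 comparison against the checksum carried in the Ignition config; done by hand, the cni-plugins step logged above would look roughly like this (illustrative paths, outside the /sysroot prefix that Ignition actually writes to):

    curl -L -o cni-plugins-linux-amd64-v1.1.1.tgz \
      https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
    echo '4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d  cni-plugins-linux-amd64-v1.1.1.tgz' | sha512sum -c -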
Feb 12 21:53:20.030605 ignition[845]: INFO : files: op(17): op(18): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 21:53:20.030800 ignition[845]: INFO : files: op(17): [finished] processing unit "coreos-metadata.service" Feb 12 21:53:20.030948 ignition[845]: INFO : files: op(19): [started] setting preset to enabled for "vmtoolsd.service" Feb 12 21:53:20.031159 ignition[845]: INFO : files: op(19): [finished] setting preset to enabled for "vmtoolsd.service" Feb 12 21:53:20.031311 ignition[845]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 21:53:20.031483 ignition[845]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 21:53:20.031653 ignition[845]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 12 21:53:20.031823 ignition[845]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 21:53:20.031977 ignition[845]: INFO : files: op(1c): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 21:53:20.032131 ignition[845]: INFO : files: op(1c): op(1d): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 21:53:20.081316 ignition[845]: INFO : files: op(1c): op(1d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 21:53:20.081588 ignition[845]: INFO : files: op(1c): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 21:53:20.081843 ignition[845]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 21:53:20.082088 ignition[845]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 21:53:20.082280 ignition[845]: INFO : files: files passed Feb 12 21:53:20.082415 ignition[845]: INFO : Ignition finished successfully Feb 12 21:53:20.084422 systemd[1]: Finished ignition-files.service. Feb 12 21:53:20.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.085257 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 21:53:20.090178 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 12 21:53:20.090194 kernel: audit: type=1130 audit(1707774800.082:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.085406 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 21:53:20.085946 systemd[1]: Starting ignition-quench.service... Feb 12 21:53:20.092254 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 21:53:20.092338 systemd[1]: Finished ignition-quench.service. Feb 12 21:53:20.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:20.093537 initrd-setup-root-after-ignition[871]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 21:53:20.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.099498 kernel: audit: type=1130 audit(1707774800.091:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.099528 kernel: audit: type=1131 audit(1707774800.091:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.099741 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 21:53:20.099970 systemd[1]: Reached target ignition-complete.target. Feb 12 21:53:20.103487 kernel: audit: type=1130 audit(1707774800.098:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.104102 systemd[1]: Starting initrd-parse-etc.service... Feb 12 21:53:20.115784 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 21:53:20.115837 systemd[1]: Finished initrd-parse-etc.service. Feb 12 21:53:20.116022 systemd[1]: Reached target initrd-fs.target. Feb 12 21:53:20.116113 systemd[1]: Reached target initrd.target. Feb 12 21:53:20.116223 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 21:53:20.116732 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 21:53:20.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.124070 kernel: audit: type=1130 audit(1707774800.114:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.124098 kernel: audit: type=1131 audit(1707774800.114:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.129596 kernel: audit: type=1130 audit(1707774800.124:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.125928 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 12 21:53:20.126516 systemd[1]: Starting initrd-cleanup.service... Feb 12 21:53:20.133806 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 21:53:20.133874 systemd[1]: Finished initrd-cleanup.service. Feb 12 21:53:20.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.134602 systemd[1]: Stopped target network.target. Feb 12 21:53:20.140192 kernel: audit: type=1130 audit(1707774800.132:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.140210 kernel: audit: type=1131 audit(1707774800.132:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.140362 systemd[1]: Stopped target nss-lookup.target. Feb 12 21:53:20.140692 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 21:53:20.140921 systemd[1]: Stopped target timers.target. Feb 12 21:53:20.141144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 21:53:20.141454 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 21:53:20.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.141761 systemd[1]: Stopped target initrd.target. Feb 12 21:53:20.144862 systemd[1]: Stopped target basic.target. Feb 12 21:53:20.145062 systemd[1]: Stopped target ignition-complete.target. Feb 12 21:53:20.145270 systemd[1]: Stopped target ignition-diskful.target. Feb 12 21:53:20.145473 systemd[1]: Stopped target initrd-root-device.target. Feb 12 21:53:20.145622 kernel: audit: type=1131 audit(1707774800.140:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.145734 systemd[1]: Stopped target remote-fs.target. Feb 12 21:53:20.145930 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 21:53:20.146146 systemd[1]: Stopped target sysinit.target. Feb 12 21:53:20.146345 systemd[1]: Stopped target local-fs.target. Feb 12 21:53:20.146541 systemd[1]: Stopped target local-fs-pre.target. Feb 12 21:53:20.146827 systemd[1]: Stopped target swap.target. Feb 12 21:53:20.147029 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 21:53:20.147191 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 21:53:20.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.147461 systemd[1]: Stopped target cryptsetup.target. Feb 12 21:53:20.147680 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 21:53:20.147838 systemd[1]: Stopped dracut-initqueue.service. 
Feb 12 21:53:20.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.148159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 21:53:20.148319 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 21:53:20.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.148590 systemd[1]: Stopped target paths.target. Feb 12 21:53:20.148783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 21:53:20.150658 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 21:53:20.150912 systemd[1]: Stopped target slices.target. Feb 12 21:53:20.151109 systemd[1]: Stopped target sockets.target. Feb 12 21:53:20.151331 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 21:53:20.151468 systemd[1]: Closed iscsid.socket. Feb 12 21:53:20.151736 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 21:53:20.151876 systemd[1]: Closed iscsiuio.socket. Feb 12 21:53:20.152109 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 21:53:20.152276 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 21:53:20.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.152528 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 21:53:20.152550 systemd[1]: Stopped ignition-files.service. Feb 12 21:53:20.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.153412 systemd[1]: Stopping ignition-mount.service... Feb 12 21:53:20.154120 systemd[1]: Stopping sysroot-boot.service... Feb 12 21:53:20.154433 systemd[1]: Stopping systemd-networkd.service... Feb 12 21:53:20.154738 systemd[1]: Stopping systemd-resolved.service... Feb 12 21:53:20.154947 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 21:53:20.155107 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 21:53:20.155365 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 21:53:20.155516 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 21:53:20.160587 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 21:53:20.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:20.161736 ignition[884]: INFO : Ignition 2.14.0 Feb 12 21:53:20.161736 ignition[884]: INFO : Stage: umount Feb 12 21:53:20.161736 ignition[884]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:53:20.161736 ignition[884]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:53:20.163311 systemd[1]: Stopped systemd-networkd.service. Feb 12 21:53:20.163639 ignition[884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:53:20.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.164539 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 21:53:20.164632 systemd[1]: Stopped systemd-resolved.service. Feb 12 21:53:20.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.166407 ignition[884]: INFO : umount: umount passed Feb 12 21:53:20.166407 ignition[884]: INFO : Ignition finished successfully Feb 12 21:53:20.166972 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 21:53:20.167190 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 21:53:20.165000 audit: BPF prog-id=9 op=UNLOAD Feb 12 21:53:20.165000 audit: BPF prog-id=6 op=UNLOAD Feb 12 21:53:20.167211 systemd[1]: Closed systemd-networkd.socket. Feb 12 21:53:20.170416 systemd[1]: Stopping network-cleanup.service... Feb 12 21:53:20.170641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 21:53:20.170669 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 21:53:20.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.171083 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Feb 12 21:53:20.171105 systemd[1]: Stopped afterburn-network-kargs.service. Feb 12 21:53:20.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.171558 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 21:53:20.171791 systemd[1]: Stopped systemd-sysctl.service. Feb 12 21:53:20.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.172128 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 21:53:20.172158 systemd[1]: Stopped systemd-modules-load.service. Feb 12 21:53:20.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.173220 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 12 21:53:20.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.173492 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 21:53:20.173538 systemd[1]: Stopped ignition-mount.service. Feb 12 21:53:20.173979 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 21:53:20.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.174005 systemd[1]: Stopped ignition-disks.service. Feb 12 21:53:20.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.174208 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 21:53:20.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.174227 systemd[1]: Stopped ignition-kargs.service. Feb 12 21:53:20.174363 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 21:53:20.174384 systemd[1]: Stopped ignition-setup.service. Feb 12 21:53:20.174561 systemd[1]: Stopping systemd-udevd.service... Feb 12 21:53:20.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.176491 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 21:53:20.176543 systemd[1]: Stopped network-cleanup.service. Feb 12 21:53:20.180467 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 21:53:20.180544 systemd[1]: Stopped systemd-udevd.service. Feb 12 21:53:20.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.180895 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 21:53:20.180918 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 21:53:20.181131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 21:53:20.181145 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 21:53:20.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.181289 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 21:53:20.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.181309 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 21:53:20.181483 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 21:53:20.181502 systemd[1]: Stopped dracut-cmdline.service. 
Feb 12 21:53:20.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.181642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 21:53:20.181662 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 21:53:20.182301 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 21:53:20.182413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 21:53:20.182440 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 21:53:20.182707 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 21:53:20.182727 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 21:53:20.182832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 21:53:20.182851 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 21:53:20.183645 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 21:53:20.185694 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 21:53:20.185741 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 21:53:20.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.595573 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 21:53:20.595632 systemd[1]: Stopped sysroot-boot.service. Feb 12 21:53:20.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.595916 systemd[1]: Reached target initrd-switch-root.target. Feb 12 21:53:20.596021 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 21:53:20.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:20.596047 systemd[1]: Stopped initrd-setup-root.service. Feb 12 21:53:20.596656 systemd[1]: Starting initrd-switch-root.service... Feb 12 21:53:20.611746 systemd[1]: Switching root. Feb 12 21:53:20.612000 audit: BPF prog-id=5 op=UNLOAD Feb 12 21:53:20.612000 audit: BPF prog-id=4 op=UNLOAD Feb 12 21:53:20.612000 audit: BPF prog-id=3 op=UNLOAD Feb 12 21:53:20.612000 audit: BPF prog-id=8 op=UNLOAD Feb 12 21:53:20.612000 audit: BPF prog-id=7 op=UNLOAD Feb 12 21:53:20.628628 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Feb 12 21:53:20.628679 iscsid[735]: iscsid shutting down. 
Feb 12 21:53:20.628823 systemd-journald[216]: Journal stopped Feb 12 21:53:23.063401 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 21:53:23.063422 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 21:53:23.063430 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 21:53:23.063435 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 21:53:23.063440 kernel: SELinux: policy capability open_perms=1 Feb 12 21:53:23.063445 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 21:53:23.063453 kernel: SELinux: policy capability always_check_network=0 Feb 12 21:53:23.063458 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 21:53:23.063464 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 21:53:23.063469 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 21:53:23.063474 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 21:53:23.063481 systemd[1]: Successfully loaded SELinux policy in 38.118ms. Feb 12 21:53:23.063489 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.894ms. Feb 12 21:53:23.063496 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 21:53:23.063504 systemd[1]: Detected virtualization vmware. Feb 12 21:53:23.063510 systemd[1]: Detected architecture x86-64. Feb 12 21:53:23.063516 systemd[1]: Detected first boot. Feb 12 21:53:23.063524 systemd[1]: Initializing machine ID from random generator. Feb 12 21:53:23.063530 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 21:53:23.063537 systemd[1]: Populated /etc with preset unit settings. Feb 12 21:53:23.063543 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:53:23.063550 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:53:23.063558 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:53:23.063564 systemd[1]: Queued start job for default target multi-user.target. Feb 12 21:53:23.063600 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 21:53:23.063609 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 21:53:23.063616 systemd[1]: Created slice system-getty.slice. Feb 12 21:53:23.063622 systemd[1]: Created slice system-modprobe.slice. Feb 12 21:53:23.063628 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 21:53:23.063634 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 21:53:23.063641 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 21:53:23.063648 systemd[1]: Created slice user.slice. Feb 12 21:53:23.063655 systemd[1]: Started systemd-ask-password-console.path. Feb 12 21:53:23.063661 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 21:53:23.063667 systemd[1]: Set up automount boot.automount. 
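The two locksmithd.service warnings above concern cgroup-v1 era directives that systemd 252 still accepts but flags: CPUShares= maps to CPUWeight= (the default of 1024 shares corresponds to weight 100) and MemoryLimit= maps to MemoryMax=. A drop-in sketch with placeholder values, since the vendor unit's actual numbers are not shown in this log:

    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat > /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf <<'EOF'
    [Service]
    CPUWeight=100
    MemoryMax=512M
    EOF
    systemctl daemon-reload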
Feb 12 21:53:23.063673 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 21:53:23.063680 systemd[1]: Reached target integritysetup.target. Feb 12 21:53:23.063686 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:53:23.063696 systemd[1]: Reached target remote-fs.target. Feb 12 21:53:23.064034 systemd[1]: Reached target slices.target. Feb 12 21:53:23.064049 systemd[1]: Reached target swap.target. Feb 12 21:53:23.064058 systemd[1]: Reached target torcx.target. Feb 12 21:53:23.064065 systemd[1]: Reached target veritysetup.target. Feb 12 21:53:23.064072 systemd[1]: Listening on systemd-coredump.socket. Feb 12 21:53:23.064078 systemd[1]: Listening on systemd-initctl.socket. Feb 12 21:53:23.064085 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 21:53:23.064091 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 21:53:23.064098 systemd[1]: Listening on systemd-journald.socket. Feb 12 21:53:23.064105 systemd[1]: Listening on systemd-networkd.socket. Feb 12 21:53:23.064112 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 21:53:23.064118 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 21:53:23.064125 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 21:53:23.064132 systemd[1]: Mounting dev-hugepages.mount... Feb 12 21:53:23.064140 systemd[1]: Mounting dev-mqueue.mount... Feb 12 21:53:23.064147 systemd[1]: Mounting media.mount... Feb 12 21:53:23.064154 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 21:53:23.064161 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 21:53:23.064167 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 21:53:23.064174 systemd[1]: Mounting tmp.mount... Feb 12 21:53:23.064181 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 21:53:23.064195 systemd[1]: Starting ignition-delete-config.service... Feb 12 21:53:23.064203 systemd[1]: Starting kmod-static-nodes.service... Feb 12 21:53:23.064211 systemd[1]: Starting modprobe@configfs.service... Feb 12 21:53:23.064218 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 21:53:23.064225 systemd[1]: Starting modprobe@drm.service... Feb 12 21:53:23.064232 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 21:53:23.064238 systemd[1]: Starting modprobe@fuse.service... Feb 12 21:53:23.064245 systemd[1]: Starting modprobe@loop.service... Feb 12 21:53:23.064252 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 21:53:23.064259 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 21:53:23.064266 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 21:53:23.064273 systemd[1]: Starting systemd-journald.service... Feb 12 21:53:23.064280 kernel: fuse: init (API version 7.34) Feb 12 21:53:23.064286 systemd[1]: Starting systemd-modules-load.service... Feb 12 21:53:23.064293 systemd[1]: Starting systemd-network-generator.service... Feb 12 21:53:23.064300 systemd[1]: Starting systemd-remount-fs.service... Feb 12 21:53:23.064306 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:53:23.064313 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 21:53:23.064319 systemd[1]: Mounted dev-hugepages.mount. Feb 12 21:53:23.064327 systemd[1]: Mounted dev-mqueue.mount. 
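The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units being started above are all instances of a single template unit, with the module name passed as the instance string. A simplified sketch of such a template (the general shape only, not necessarily the exact unit shipped on this image):

    # modprobe@.service  (simplified sketch; modprobe@fuse.service runs it with %i = "fuse")
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/modprobe %i   # the modprobe path may differ per distribution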
Feb 12 21:53:23.064334 systemd[1]: Mounted media.mount. Feb 12 21:53:23.064341 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 21:53:23.064348 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 21:53:23.064355 systemd[1]: Mounted tmp.mount. Feb 12 21:53:23.064361 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:53:23.064368 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 21:53:23.064375 systemd[1]: Finished modprobe@configfs.service. Feb 12 21:53:23.064382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 21:53:23.064390 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 21:53:23.064396 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 21:53:23.064403 systemd[1]: Finished modprobe@drm.service. Feb 12 21:53:23.064410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 21:53:23.064416 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 21:53:23.064423 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 21:53:23.064429 systemd[1]: Finished modprobe@fuse.service. Feb 12 21:53:23.064436 systemd[1]: Finished systemd-network-generator.service. Feb 12 21:53:23.064448 systemd-journald[1042]: Journal started Feb 12 21:53:23.064480 systemd-journald[1042]: Runtime Journal (/run/log/journal/2ad715094cd44ce18b97f1b4e0050a80) is 4.8M, max 38.8M, 34.0M free. Feb 12 21:53:23.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:23.057000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 21:53:23.057000 audit[1042]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe834320e0 a2=4000 a3=7ffe8343217c items=0 ppid=1 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:53:23.057000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 21:53:23.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.065207 jq[1015]: true Feb 12 21:53:23.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.066192 systemd[1]: Started systemd-journald.service. Feb 12 21:53:23.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.067786 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 21:53:23.070651 jq[1069]: true Feb 12 21:53:23.070656 systemd[1]: Finished systemd-remount-fs.service. Feb 12 21:53:23.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.074630 systemd[1]: Finished systemd-modules-load.service. Feb 12 21:53:23.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.075693 systemd[1]: Reached target network-pre.target. Feb 12 21:53:23.076566 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 21:53:23.077362 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 21:53:23.077464 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 12 21:53:23.078629 kernel: loop: module loaded Feb 12 21:53:23.082858 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 21:53:23.087021 systemd[1]: Starting systemd-journal-flush.service... Feb 12 21:53:23.087153 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 21:53:23.088378 systemd[1]: Starting systemd-random-seed.service... Feb 12 21:53:23.089527 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:53:23.094033 systemd[1]: Starting systemd-sysusers.service... Feb 12 21:53:23.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.096132 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 21:53:23.096223 systemd[1]: Finished modprobe@loop.service. Feb 12 21:53:23.096401 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 21:53:23.096525 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 21:53:23.097009 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 21:53:23.107710 systemd-journald[1042]: Time spent on flushing to /var/log/journal/2ad715094cd44ce18b97f1b4e0050a80 is 54.965ms for 1954 entries. Feb 12 21:53:23.107710 systemd-journald[1042]: System Journal (/var/log/journal/2ad715094cd44ce18b97f1b4e0050a80) is 8.0M, max 584.8M, 576.8M free. Feb 12 21:53:23.189924 systemd-journald[1042]: Received client request to flush runtime journal. Feb 12 21:53:23.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.113042 systemd[1]: Finished systemd-random-seed.service. Feb 12 21:53:23.113194 systemd[1]: Reached target first-boot-complete.target. Feb 12 21:53:23.133337 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:53:23.144217 systemd[1]: Finished systemd-sysusers.service. Feb 12 21:53:23.145210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:53:23.184428 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:53:23.185474 systemd[1]: Starting systemd-udev-settle.service... Feb 12 21:53:23.190917 systemd[1]: Finished systemd-journal-flush.service. 
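The journal sizes reported above (Runtime Journal capped at 38.8M, System Journal capped at 584.8M) are journald's automatic limits, scaled from the size of the backing filesystems. If fixed caps were wanted instead, they could be set in journald.conf, for example (illustrative values, not this system's configuration):

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=40M    # cap for the volatile journal under /run/log/journal
    SystemMaxUse=500M    # cap for the persistent journal under /var/log/journal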
Feb 12 21:53:23.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.193624 udevadm[1103]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 21:53:23.200430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:53:23.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.222895 ignition[1074]: Ignition 2.14.0 Feb 12 21:53:23.223078 ignition[1074]: deleting config from guestinfo properties Feb 12 21:53:23.225110 ignition[1074]: Successfully deleted config Feb 12 21:53:23.225649 systemd[1]: Finished ignition-delete-config.service. Feb 12 21:53:23.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.567923 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 21:53:23.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.568997 systemd[1]: Starting systemd-udevd.service... Feb 12 21:53:23.580917 systemd-udevd[1108]: Using default interface naming scheme 'v252'. Feb 12 21:53:23.602752 systemd[1]: Started systemd-udevd.service. Feb 12 21:53:23.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.603940 systemd[1]: Starting systemd-networkd.service... Feb 12 21:53:23.610818 systemd[1]: Starting systemd-userdbd.service... Feb 12 21:53:23.633842 systemd[1]: Found device dev-ttyS0.device. Feb 12 21:53:23.636625 systemd[1]: Started systemd-userdbd.service. Feb 12 21:53:23.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.676780 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 21:53:23.681729 kernel: ACPI: button: Power Button [PWRF] Feb 12 21:53:23.689066 systemd-networkd[1110]: lo: Link UP Feb 12 21:53:23.689070 systemd-networkd[1110]: lo: Gained carrier Feb 12 21:53:23.689738 systemd-networkd[1110]: Enumeration completed Feb 12 21:53:23.689797 systemd-networkd[1110]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Feb 12 21:53:23.689806 systemd[1]: Started systemd-networkd.service. Feb 12 21:53:23.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:23.692697 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 12 21:53:23.692813 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 12 21:53:23.693889 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Feb 12 21:53:23.694162 systemd-networkd[1110]: ens192: Link UP Feb 12 21:53:23.694287 systemd-networkd[1110]: ens192: Gained carrier Feb 12 21:53:23.717584 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1109) Feb 12 21:53:23.752087 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 12 21:53:23.759000 audit[1118]: AVC avc: denied { confidentiality } for pid=1118 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 21:53:23.759000 audit[1118]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555b44314d80 a1=32194 a2=7f7afd3a4bc5 a3=5 items=108 ppid=1108 pid=1118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:53:23.759000 audit: CWD cwd="/" Feb 12 21:53:23.759000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.763597 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Feb 12 21:53:23.759000 audit: PATH item=1 name=(null) inode=16355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=2 name=(null) inode=16355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=3 name=(null) inode=16356 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=4 name=(null) inode=16355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=5 name=(null) inode=16357 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=6 name=(null) inode=16355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=7 name=(null) inode=16358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=8 name=(null) inode=16358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=9 name=(null) inode=16359 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=10 name=(null) inode=16358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=11 name=(null) inode=16360 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=12 name=(null) inode=16358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=13 name=(null) inode=16361 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=14 name=(null) inode=16358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=15 name=(null) inode=16362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=16 name=(null) inode=16358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=17 name=(null) inode=16363 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=18 name=(null) inode=16355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=19 name=(null) inode=16364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=20 name=(null) inode=16364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=21 name=(null) inode=16365 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=22 name=(null) inode=16364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=23 name=(null) inode=16366 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=24 name=(null) inode=16364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=25 name=(null) inode=16367 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:53:23.759000 audit: PATH item=26 name=(null) inode=16364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=27 name=(null) inode=16368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=28 name=(null) inode=16364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=29 name=(null) inode=16369 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=30 name=(null) inode=16355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=31 name=(null) inode=16370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=32 name=(null) inode=16370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=33 name=(null) inode=16371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=34 name=(null) inode=16370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=35 name=(null) inode=16372 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=36 name=(null) inode=16370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=37 name=(null) inode=16373 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=38 name=(null) inode=16370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=39 name=(null) inode=16374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=40 name=(null) inode=16370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=41 name=(null) inode=16375 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=42 name=(null) inode=16355 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=43 name=(null) inode=16376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=44 name=(null) inode=16376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=45 name=(null) inode=16377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=46 name=(null) inode=16376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=47 name=(null) inode=16378 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=48 name=(null) inode=16376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=49 name=(null) inode=16379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=50 name=(null) inode=16376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=51 name=(null) inode=16380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=52 name=(null) inode=16376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=53 name=(null) inode=16381 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=55 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=56 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=57 name=(null) inode=16383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=58 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=59 name=(null) inode=16384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=60 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=61 name=(null) inode=25601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=62 name=(null) inode=25601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=63 name=(null) inode=25602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=64 name=(null) inode=25601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=65 name=(null) inode=25603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=66 name=(null) inode=25601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=67 name=(null) inode=25604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=68 name=(null) inode=25601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=69 name=(null) inode=25605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=70 name=(null) inode=25601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=71 name=(null) inode=25606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=72 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=73 name=(null) inode=25607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=74 name=(null) inode=25607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:53:23.759000 audit: PATH item=75 name=(null) inode=25608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=76 name=(null) inode=25607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=77 name=(null) inode=25609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=78 name=(null) inode=25607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=79 name=(null) inode=25610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=80 name=(null) inode=25607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=81 name=(null) inode=25611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=82 name=(null) inode=25607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=83 name=(null) inode=25612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=84 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=85 name=(null) inode=25613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=86 name=(null) inode=25613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=87 name=(null) inode=25614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=88 name=(null) inode=25613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=89 name=(null) inode=25615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=90 name=(null) inode=25613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=91 name=(null) inode=25616 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=92 name=(null) inode=25613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=93 name=(null) inode=25617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=94 name=(null) inode=25613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=95 name=(null) inode=25618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=96 name=(null) inode=16382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=97 name=(null) inode=25619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=98 name=(null) inode=25619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=99 name=(null) inode=25620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=100 name=(null) inode=25619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=101 name=(null) inode=25621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=102 name=(null) inode=25619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=103 name=(null) inode=25622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=104 name=(null) inode=25619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=105 name=(null) inode=25623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=106 name=(null) inode=25619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PATH item=107 name=(null) inode=25624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:53:23.759000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 21:53:23.766582 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 12 21:53:23.766696 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Feb 12 21:53:23.769585 kernel: Guest personality initialized and is active Feb 12 21:53:23.771628 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 12 21:53:23.771674 kernel: Initialized host personality Feb 12 21:53:23.776608 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 21:53:23.797585 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 21:53:23.800994 (udev-worker)[1123]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Feb 12 21:53:23.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.813845 systemd[1]: Finished systemd-udev-settle.service. Feb 12 21:53:23.814944 systemd[1]: Starting lvm2-activation-early.service... Feb 12 21:53:23.834317 lvm[1143]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 21:53:23.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.858178 systemd[1]: Finished lvm2-activation-early.service. Feb 12 21:53:23.858355 systemd[1]: Reached target cryptsetup.target. Feb 12 21:53:23.859332 systemd[1]: Starting lvm2-activation.service... Feb 12 21:53:23.862583 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 21:53:23.885177 systemd[1]: Finished lvm2-activation.service. Feb 12 21:53:23.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.885347 systemd[1]: Reached target local-fs-pre.target. Feb 12 21:53:23.885442 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 21:53:23.885458 systemd[1]: Reached target local-fs.target. Feb 12 21:53:23.885546 systemd[1]: Reached target machines.target. Feb 12 21:53:23.886636 systemd[1]: Starting ldconfig.service... Feb 12 21:53:23.887338 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 21:53:23.887372 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:53:23.888229 systemd[1]: Starting systemd-boot-update.service... Feb 12 21:53:23.888990 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 21:53:23.889920 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 21:53:23.890078 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 21:53:23.890108 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 21:53:23.890890 systemd[1]: Starting systemd-tmpfiles-setup.service... 
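The lvmetad warnings above come from the lvm2 activation units: LVM tries to reach the (absent) lvmetad daemon and then falls back to scanning block devices directly, which is harmless here. If the warning were unwanted, the corresponding lvm.conf knob would look roughly like this (assumed excerpt, not taken from this image):

    # /etc/lvm/lvm.conf  (excerpt, assumed)
    global {
        use_lvmetad = 0   # do not try to contact lvmetad; always scan devices directly
    }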
Feb 12 21:53:23.897059 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1148 (bootctl) Feb 12 21:53:23.897880 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 21:53:23.906293 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 21:53:23.914802 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 21:53:23.921887 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 21:53:23.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:23.922316 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 21:53:24.372777 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 21:53:24.373262 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 21:53:24.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.412036 systemd-fsck[1157]: fsck.fat 4.2 (2021-01-31) Feb 12 21:53:24.412036 systemd-fsck[1157]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 21:53:24.413132 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 21:53:24.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.414212 systemd[1]: Mounting boot.mount... Feb 12 21:53:24.426296 systemd[1]: Mounted boot.mount. Feb 12 21:53:24.436330 systemd[1]: Finished systemd-boot-update.service. Feb 12 21:53:24.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.481947 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 21:53:24.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.483033 systemd[1]: Starting audit-rules.service... Feb 12 21:53:24.484037 systemd[1]: Starting clean-ca-certificates.service... Feb 12 21:53:24.484986 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 21:53:24.486245 systemd[1]: Starting systemd-resolved.service... Feb 12 21:53:24.487265 systemd[1]: Starting systemd-timesyncd.service... Feb 12 21:53:24.490174 systemd[1]: Starting systemd-update-utmp.service... Feb 12 21:53:24.491396 systemd[1]: Finished clean-ca-certificates.service. Feb 12 21:53:24.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:24.491893 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 21:53:24.499000 audit[1172]: SYSTEM_BOOT pid=1172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.501769 systemd[1]: Finished systemd-update-utmp.service. Feb 12 21:53:24.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.524226 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 21:53:24.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:24.540000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 21:53:24.540000 audit[1188]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb3db38d0 a2=420 a3=0 items=0 ppid=1165 pid=1188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:53:24.540000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 21:53:24.542872 augenrules[1188]: No rules Feb 12 21:53:24.543022 systemd[1]: Finished audit-rules.service. Feb 12 21:53:24.551646 systemd-resolved[1168]: Positive Trust Anchors: Feb 12 21:53:24.551653 systemd-resolved[1168]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:53:24.551673 systemd-resolved[1168]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:53:24.562573 systemd[1]: Started systemd-timesyncd.service. Feb 12 21:53:24.562758 systemd[1]: Reached target time-set.target. Feb 12 21:53:24.574882 systemd-resolved[1168]: Defaulting to hostname 'linux'. Feb 12 21:53:24.575911 systemd[1]: Started systemd-resolved.service. Feb 12 21:53:24.576051 systemd[1]: Reached target network.target. Feb 12 21:53:24.576134 systemd[1]: Reached target nss-lookup.target. Feb 12 21:54:09.238400 systemd-timesyncd[1169]: Contacted time server 51.81.209.232:123 (0.flatcar.pool.ntp.org). Feb 12 21:54:09.238493 systemd-timesyncd[1169]: Initial clock synchronization to Mon 2024-02-12 21:54:09.238288 UTC. Feb 12 21:54:09.238518 systemd-resolved[1168]: Clock change detected. Flushing caches. Feb 12 21:54:09.346080 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 21:54:09.364006 systemd[1]: Finished ldconfig.service. Feb 12 21:54:09.365116 systemd[1]: Starting systemd-update-done.service... 
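systemd-timesyncd above synchronized against 0.flatcar.pool.ntp.org, presumably the distribution default; the jump in timestamps (and the "Clock change detected" cache flush from systemd-resolved) is just the first synchronization after boot. Overriding the server set would be done in timesyncd.conf, for example (illustrative, not the configuration actually in use):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
    # FallbackNTP= servers are only consulted when no NTP= server is reachable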
Feb 12 21:54:09.369401 systemd[1]: Finished systemd-update-done.service. Feb 12 21:54:09.369556 systemd[1]: Reached target sysinit.target. Feb 12 21:54:09.369690 systemd[1]: Started motdgen.path. Feb 12 21:54:09.369786 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 21:54:09.369959 systemd[1]: Started logrotate.timer. Feb 12 21:54:09.370090 systemd[1]: Started mdadm.timer. Feb 12 21:54:09.370168 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 21:54:09.370267 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 21:54:09.370287 systemd[1]: Reached target paths.target. Feb 12 21:54:09.370366 systemd[1]: Reached target timers.target. Feb 12 21:54:09.370613 systemd[1]: Listening on dbus.socket. Feb 12 21:54:09.371509 systemd[1]: Starting docker.socket... Feb 12 21:54:09.372697 systemd[1]: Listening on sshd.socket. Feb 12 21:54:09.372831 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:54:09.373068 systemd[1]: Listening on docker.socket. Feb 12 21:54:09.373164 systemd[1]: Reached target sockets.target. Feb 12 21:54:09.373256 systemd[1]: Reached target basic.target. Feb 12 21:54:09.373419 systemd[1]: System is tainted: cgroupsv1 Feb 12 21:54:09.373446 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 21:54:09.373459 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 21:54:09.374236 systemd[1]: Starting containerd.service... Feb 12 21:54:09.375027 systemd[1]: Starting dbus.service... Feb 12 21:54:09.375906 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 21:54:09.376709 systemd[1]: Starting extend-filesystems.service... Feb 12 21:54:09.377966 jq[1203]: false Feb 12 21:54:09.376970 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 21:54:09.377693 systemd[1]: Starting motdgen.service... Feb 12 21:54:09.379742 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 21:54:09.383144 systemd[1]: Starting prepare-critools.service... Feb 12 21:54:09.389300 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 21:54:09.390531 systemd[1]: Starting sshd-keygen.service... Feb 12 21:54:09.392186 systemd[1]: Starting systemd-logind.service... Feb 12 21:54:09.392440 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:54:09.392471 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 21:54:09.403318 jq[1220]: true Feb 12 21:54:09.393267 systemd[1]: Starting update-engine.service... Feb 12 21:54:09.394165 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 21:54:09.395094 systemd[1]: Starting vmtoolsd.service... Feb 12 21:54:09.395996 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 21:54:09.432947 tar[1224]: crictl Feb 12 21:54:09.396620 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
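The containerd startup that follows probes each snapshotter plugin and skips the ones whose prerequisites are missing (no aufs module, backing filesystem is not btrfs or zfs, devmapper not configured), so overlayfs ends up as the effective snapshotter. With containerd 1.6 the same choice can be made explicit in config.toml, for example (illustrative excerpt; this system appears to be running on defaults):

    # /etc/containerd/config.toml  (excerpt, illustrative)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"   # the only snapshotter whose checks pass in the log below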
Feb 12 21:54:09.433145 tar[1223]: ./ Feb 12 21:54:09.433145 tar[1223]: ./macvlan Feb 12 21:54:09.402424 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 21:54:09.433339 jq[1231]: true Feb 12 21:54:09.402559 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 21:54:09.415907 systemd[1]: Started vmtoolsd.service. Feb 12 21:54:09.426454 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 21:54:09.426581 systemd[1]: Finished motdgen.service. Feb 12 21:54:09.448287 extend-filesystems[1204]: Found sda Feb 12 21:54:09.448287 extend-filesystems[1204]: Found sda1 Feb 12 21:54:09.448287 extend-filesystems[1204]: Found sda2 Feb 12 21:54:09.448287 extend-filesystems[1204]: Found sda3 Feb 12 21:54:09.448287 extend-filesystems[1204]: Found usr Feb 12 21:54:09.448287 extend-filesystems[1204]: Found sda4 Feb 12 21:54:09.454475 extend-filesystems[1204]: Found sda6 Feb 12 21:54:09.454475 extend-filesystems[1204]: Found sda7 Feb 12 21:54:09.454475 extend-filesystems[1204]: Found sda9 Feb 12 21:54:09.454475 extend-filesystems[1204]: Checking size of /dev/sda9 Feb 12 21:54:09.450266 systemd[1]: Started dbus.service. Feb 12 21:54:09.450178 dbus-daemon[1201]: [system] SELinux support is enabled Feb 12 21:54:09.470426 kernel: NET: Registered PF_VSOCK protocol family Feb 12 21:54:09.451510 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 21:54:09.470528 extend-filesystems[1204]: Old size kept for /dev/sda9 Feb 12 21:54:09.470528 extend-filesystems[1204]: Found sr0 Feb 12 21:54:09.451524 systemd[1]: Reached target system-config.target. Feb 12 21:54:09.451641 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 21:54:09.451678 systemd[1]: Reached target user-config.target. Feb 12 21:54:09.464141 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 21:54:09.464292 systemd[1]: Finished extend-filesystems.service. Feb 12 21:54:09.533698 update_engine[1219]: I0212 21:54:09.528733 1219 main.cc:92] Flatcar Update Engine starting Feb 12 21:54:09.535711 bash[1276]: Updated "/home/core/.ssh/authorized_keys" Feb 12 21:54:09.535895 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 21:54:09.537569 systemd[1]: Started update-engine.service. Feb 12 21:54:09.537692 update_engine[1219]: I0212 21:54:09.537597 1219 update_check_scheduler.cc:74] Next update check in 2m41s Feb 12 21:54:09.538795 systemd[1]: Started locksmithd.service. Feb 12 21:54:09.557035 env[1227]: time="2024-02-12T21:54:09.556851204Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 21:54:09.566072 systemd-logind[1217]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 21:54:09.566269 systemd-logind[1217]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 21:54:09.569042 systemd-logind[1217]: New seat seat0. Feb 12 21:54:09.572609 systemd[1]: Started systemd-logind.service. Feb 12 21:54:09.578496 tar[1223]: ./static Feb 12 21:54:09.587792 env[1227]: time="2024-02-12T21:54:09.587760841Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 21:54:09.587876 env[1227]: time="2024-02-12T21:54:09.587863757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 21:54:09.588982 env[1227]: time="2024-02-12T21:54:09.588963034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:54:09.588982 env[1227]: time="2024-02-12T21:54:09.588980001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589641 env[1227]: time="2024-02-12T21:54:09.589625148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589641 env[1227]: time="2024-02-12T21:54:09.589638187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589695 env[1227]: time="2024-02-12T21:54:09.589645987Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 21:54:09.589695 env[1227]: time="2024-02-12T21:54:09.589651739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589729 env[1227]: time="2024-02-12T21:54:09.589693640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589831 env[1227]: time="2024-02-12T21:54:09.589818027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589920 env[1227]: time="2024-02-12T21:54:09.589905204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:54:09.589920 env[1227]: time="2024-02-12T21:54:09.589917553Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 21:54:09.589962 env[1227]: time="2024-02-12T21:54:09.589943424Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 21:54:09.589962 env[1227]: time="2024-02-12T21:54:09.589950957Z" level=info msg="metadata content store policy set" policy=shared Feb 12 21:54:09.597002 env[1227]: time="2024-02-12T21:54:09.596936677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 21:54:09.597002 env[1227]: time="2024-02-12T21:54:09.596956849Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 21:54:09.597002 env[1227]: time="2024-02-12T21:54:09.596966457Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.596989312Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.597028893Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.597038831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.597046466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.597053923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.597060995Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597076 env[1227]: time="2024-02-12T21:54:09.597068500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597180 env[1227]: time="2024-02-12T21:54:09.597088013Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597180 env[1227]: time="2024-02-12T21:54:09.597098060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 21:54:09.597180 env[1227]: time="2024-02-12T21:54:09.597160952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 21:54:09.597224 env[1227]: time="2024-02-12T21:54:09.597208515Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 21:54:09.597616 env[1227]: time="2024-02-12T21:54:09.597599918Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 21:54:09.597649 env[1227]: time="2024-02-12T21:54:09.597636775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597668 env[1227]: time="2024-02-12T21:54:09.597651895Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 21:54:09.597711 env[1227]: time="2024-02-12T21:54:09.597697263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597735 env[1227]: time="2024-02-12T21:54:09.597711252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597735 env[1227]: time="2024-02-12T21:54:09.597722006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597735 env[1227]: time="2024-02-12T21:54:09.597731097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597783 env[1227]: time="2024-02-12T21:54:09.597740286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597783 env[1227]: time="2024-02-12T21:54:09.597749280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597783 env[1227]: time="2024-02-12T21:54:09.597758831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597830 env[1227]: time="2024-02-12T21:54:09.597782152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 12 21:54:09.597830 env[1227]: time="2024-02-12T21:54:09.597799687Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 21:54:09.597898 env[1227]: time="2024-02-12T21:54:09.597883264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597927 env[1227]: time="2024-02-12T21:54:09.597898089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597927 env[1227]: time="2024-02-12T21:54:09.597907946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.597927 env[1227]: time="2024-02-12T21:54:09.597925020Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 21:54:09.597987 env[1227]: time="2024-02-12T21:54:09.597939830Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 21:54:09.597987 env[1227]: time="2024-02-12T21:54:09.597949032Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 21:54:09.597987 env[1227]: time="2024-02-12T21:54:09.597962176Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 21:54:09.598049 env[1227]: time="2024-02-12T21:54:09.597987058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 21:54:09.598221 env[1227]: time="2024-02-12T21:54:09.598185786Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.598236699Z" level=info msg="Connect containerd service" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.598275548Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.598827663Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.599704430Z" level=info msg="Start subscribing containerd event" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.599941972Z" level=info msg="Start recovering state" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.599974400Z" level=info msg="Start event monitor" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.599981839Z" level=info msg="Start snapshots syncer" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.599986937Z" level=info msg="Start cni network conf syncer for default" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.600014560Z" level=info msg="Start streaming server" Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.600165249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.600187098Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 21:54:09.603305 env[1227]: time="2024-02-12T21:54:09.600212097Z" level=info msg="containerd successfully booted in 0.045752s" Feb 12 21:54:09.600285 systemd[1]: Started containerd.service. Feb 12 21:54:09.623448 systemd-networkd[1110]: ens192: Gained IPv6LL Feb 12 21:54:09.628852 tar[1223]: ./vlan Feb 12 21:54:09.686801 tar[1223]: ./portmap Feb 12 21:54:09.725683 tar[1223]: ./host-local Feb 12 21:54:09.757479 tar[1223]: ./vrf Feb 12 21:54:09.797189 tar[1223]: ./bridge Feb 12 21:54:09.840412 tar[1223]: ./tuning Feb 12 21:54:09.876035 tar[1223]: ./firewall Feb 12 21:54:09.929204 tar[1223]: ./host-device Feb 12 21:54:09.967225 tar[1223]: ./sbr Feb 12 21:54:10.000583 tar[1223]: ./loopback Feb 12 21:54:10.036657 tar[1223]: ./dhcp Feb 12 21:54:10.077144 systemd[1]: Finished prepare-critools.service. Feb 12 21:54:10.110896 tar[1223]: ./ptp Feb 12 21:54:10.135244 tar[1223]: ./ipvlan Feb 12 21:54:10.158167 tar[1223]: ./bandwidth Feb 12 21:54:10.187684 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 21:54:10.229158 locksmithd[1279]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 21:54:10.760989 sshd_keygen[1244]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 21:54:10.773139 systemd[1]: Finished sshd-keygen.service. Feb 12 21:54:10.774521 systemd[1]: Starting issuegen.service... Feb 12 21:54:10.778673 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 21:54:10.778815 systemd[1]: Finished issuegen.service. Feb 12 21:54:10.780117 systemd[1]: Starting systemd-user-sessions.service... Feb 12 21:54:10.785528 systemd[1]: Finished systemd-user-sessions.service. Feb 12 21:54:10.786733 systemd[1]: Started getty@tty1.service. 
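The containerd lines above warn that no CNI config was found in /etc/cni/net.d, even though the tarball extraction installs the bridge, host-local and portmap plugin binaries. Purely as an illustrative sketch (the network name, bridge name, subnet and file name below are assumptions, not values from this log), a minimal bridge/host-local conflist that would satisfy that check could be written like this:

    #!/usr/bin/env python3
    # Illustrative only: generate a minimal CNI conflist so the CRI plugin's
    # "no network config found in /etc/cni/net.d" warning above would not apply.
    # All concrete values here are assumptions, not taken from the log.
    import json
    import pathlib

    conf = {
        "cniVersion": "0.3.1",
        "name": "example-bridge",            # hypothetical network name
        "plugins": [
            {
                "type": "bridge",            # plugin binary unpacked above (./bridge)
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",    # plugin binary unpacked above (./host-local)
                    "subnet": "10.88.0.0/16",
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    # Writing under /etc/cni/net.d requires root; this is a sketch, not boot logic.
    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
    print(f"wrote {path}")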
Feb 12 21:54:10.787814 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 21:54:10.788040 systemd[1]: Reached target getty.target. Feb 12 21:54:10.788176 systemd[1]: Reached target multi-user.target. Feb 12 21:54:10.789387 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 21:54:10.795093 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 21:54:10.795229 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 21:54:10.795431 systemd[1]: Startup finished in 7.033s (kernel) + 5.465s (userspace) = 12.499s. Feb 12 21:54:10.819833 login[1354]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 21:54:10.821509 login[1355]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 21:54:10.829689 systemd[1]: Created slice user-500.slice. Feb 12 21:54:10.830319 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 21:54:10.831845 systemd-logind[1217]: New session 1 of user core. Feb 12 21:54:10.836501 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 21:54:10.837218 systemd[1]: Starting user@500.service... Feb 12 21:54:10.846405 (systemd)[1360]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:10.918927 systemd[1360]: Queued start job for default target default.target. Feb 12 21:54:10.919292 systemd[1360]: Reached target paths.target. Feb 12 21:54:10.919308 systemd[1360]: Reached target sockets.target. Feb 12 21:54:10.919316 systemd[1360]: Reached target timers.target. Feb 12 21:54:10.919333 systemd[1360]: Reached target basic.target. Feb 12 21:54:10.919407 systemd[1]: Started user@500.service. Feb 12 21:54:10.919964 systemd[1]: Started session-1.scope. Feb 12 21:54:10.923024 systemd[1360]: Reached target default.target. Feb 12 21:54:10.923730 systemd[1360]: Startup finished in 74ms. Feb 12 21:54:11.820367 login[1354]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 21:54:11.824174 systemd[1]: Started session-2.scope. Feb 12 21:54:11.824432 systemd-logind[1217]: New session 2 of user core. Feb 12 21:54:49.529392 systemd[1]: Created slice system-sshd.slice. Feb 12 21:54:49.530357 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.89.65:34706.service. Feb 12 21:54:49.669053 sshd[1383]: Accepted publickey for core from 139.178.89.65 port 34706 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:54:49.670100 sshd[1383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:49.673030 systemd[1]: Started session-3.scope. Feb 12 21:54:49.673400 systemd-logind[1217]: New session 3 of user core. Feb 12 21:54:49.721377 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.89.65:34714.service. Feb 12 21:54:49.764633 sshd[1388]: Accepted publickey for core from 139.178.89.65 port 34714 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:54:49.765731 sshd[1388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:49.768676 systemd[1]: Started session-4.scope. Feb 12 21:54:49.769049 systemd-logind[1217]: New session 4 of user core. Feb 12 21:54:49.820441 sshd[1388]: pam_unix(sshd:session): session closed for user core Feb 12 21:54:49.820298 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.89.65:34716.service. Feb 12 21:54:49.823483 systemd[1]: sshd@1-139.178.70.105:22-139.178.89.65:34714.service: Deactivated successfully. Feb 12 21:54:49.823921 systemd[1]: session-4.scope: Deactivated successfully. 
Feb 12 21:54:49.824725 systemd-logind[1217]: Session 4 logged out. Waiting for processes to exit. Feb 12 21:54:49.827456 systemd-logind[1217]: Removed session 4. Feb 12 21:54:49.856787 sshd[1393]: Accepted publickey for core from 139.178.89.65 port 34716 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:54:49.857800 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:49.860347 systemd-logind[1217]: New session 5 of user core. Feb 12 21:54:49.860670 systemd[1]: Started session-5.scope. Feb 12 21:54:49.907830 sshd[1393]: pam_unix(sshd:session): session closed for user core Feb 12 21:54:49.910112 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.89.65:34718.service. Feb 12 21:54:49.910777 systemd[1]: sshd@2-139.178.70.105:22-139.178.89.65:34716.service: Deactivated successfully. Feb 12 21:54:49.911469 systemd-logind[1217]: Session 5 logged out. Waiting for processes to exit. Feb 12 21:54:49.911506 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 21:54:49.912295 systemd-logind[1217]: Removed session 5. Feb 12 21:54:49.947081 sshd[1400]: Accepted publickey for core from 139.178.89.65 port 34718 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:54:49.947841 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:49.950473 systemd-logind[1217]: New session 6 of user core. Feb 12 21:54:49.950740 systemd[1]: Started session-6.scope. Feb 12 21:54:50.001119 sshd[1400]: pam_unix(sshd:session): session closed for user core Feb 12 21:54:50.002643 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.89.65:34732.service. Feb 12 21:54:50.003867 systemd[1]: sshd@3-139.178.70.105:22-139.178.89.65:34718.service: Deactivated successfully. Feb 12 21:54:50.004602 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 21:54:50.004885 systemd-logind[1217]: Session 6 logged out. Waiting for processes to exit. Feb 12 21:54:50.005568 systemd-logind[1217]: Removed session 6. Feb 12 21:54:50.038073 sshd[1407]: Accepted publickey for core from 139.178.89.65 port 34732 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:54:50.039235 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:50.042714 systemd[1]: Started session-7.scope. Feb 12 21:54:50.043285 systemd-logind[1217]: New session 7 of user core. Feb 12 21:54:50.114344 sudo[1413]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 21:54:50.114925 sudo[1413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 21:54:50.628296 systemd[1]: Reloading. Feb 12 21:54:50.665913 /usr/lib/systemd/system-generators/torcx-generator[1442]: time="2024-02-12T21:54:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:54:50.667141 /usr/lib/systemd/system-generators/torcx-generator[1442]: time="2024-02-12T21:54:50Z" level=info msg="torcx already run" Feb 12 21:54:50.730297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:54:50.730313 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 12 21:54:50.743157 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:54:50.788311 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 21:54:50.793012 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 21:54:50.793398 systemd[1]: Reached target network-online.target. Feb 12 21:54:50.794626 systemd[1]: Started kubelet.service. Feb 12 21:54:50.804534 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Feb 12 21:54:50.806427 systemd[1]: Starting coreos-metadata.service... Feb 12 21:54:50.835483 kubelet[1509]: E0212 21:54:50.835451 1509 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 21:54:50.836933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 21:54:50.837020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 21:54:50.853107 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 21:54:50.853234 systemd[1]: Finished coreos-metadata.service. Feb 12 21:54:51.690793 systemd[1]: Stopped kubelet.service. Feb 12 21:54:51.701259 systemd[1]: Reloading. Feb 12 21:54:51.736654 /usr/lib/systemd/system-generators/torcx-generator[1582]: time="2024-02-12T21:54:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:54:51.736845 /usr/lib/systemd/system-generators/torcx-generator[1582]: time="2024-02-12T21:54:51Z" level=info msg="torcx already run" Feb 12 21:54:51.799265 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:54:51.799277 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:54:51.811801 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:54:51.858758 systemd[1]: Started kubelet.service. Feb 12 21:54:51.884606 kubelet[1648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 21:54:51.884835 kubelet[1648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 21:54:51.884925 kubelet[1648]: I0212 21:54:51.884905 1648 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 21:54:51.885679 kubelet[1648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 12 21:54:51.885720 kubelet[1648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 21:54:52.296392 kubelet[1648]: I0212 21:54:52.296372 1648 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 21:54:52.296392 kubelet[1648]: I0212 21:54:52.296387 1648 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 21:54:52.296531 kubelet[1648]: I0212 21:54:52.296520 1648 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 21:54:52.297838 kubelet[1648]: I0212 21:54:52.297828 1648 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 21:54:52.299452 kubelet[1648]: I0212 21:54:52.299438 1648 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 21:54:52.299683 kubelet[1648]: I0212 21:54:52.299673 1648 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 21:54:52.299729 kubelet[1648]: I0212 21:54:52.299717 1648 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 21:54:52.299793 kubelet[1648]: I0212 21:54:52.299738 1648 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 21:54:52.299793 kubelet[1648]: I0212 21:54:52.299748 1648 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 21:54:52.299865 kubelet[1648]: I0212 21:54:52.299802 1648 state_mem.go:36] "Initialized new in-memory state store" Feb 12 21:54:52.302736 kubelet[1648]: I0212 21:54:52.302724 1648 kubelet.go:398] "Attempting to sync node with API server" Feb 12 21:54:52.302839 kubelet[1648]: I0212 21:54:52.302829 1648 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 21:54:52.302904 kubelet[1648]: I0212 21:54:52.302897 1648 kubelet.go:297] "Adding apiserver pod source" Feb 12 21:54:52.302964 kubelet[1648]: I0212 21:54:52.302946 1648 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Feb 12 21:54:52.303320 kubelet[1648]: E0212 21:54:52.303305 1648 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:52.303352 kubelet[1648]: E0212 21:54:52.303339 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:52.303838 kubelet[1648]: I0212 21:54:52.303830 1648 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 21:54:52.304052 kubelet[1648]: W0212 21:54:52.304044 1648 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 21:54:52.304784 kubelet[1648]: I0212 21:54:52.304776 1648 server.go:1186] "Started kubelet" Feb 12 21:54:52.308146 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 21:54:52.308264 kubelet[1648]: I0212 21:54:52.308244 1648 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 21:54:52.308714 kubelet[1648]: I0212 21:54:52.308700 1648 server.go:451] "Adding debug handlers to kubelet server" Feb 12 21:54:52.309328 kubelet[1648]: I0212 21:54:52.309320 1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 21:54:52.310308 kubelet[1648]: E0212 21:54:52.310292 1648 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 21:54:52.310350 kubelet[1648]: E0212 21:54:52.310311 1648 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 21:54:52.311194 kubelet[1648]: E0212 21:54:52.311129 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c376422912d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 304404781, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 304404781, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
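The first kubelet start at 21:54:50 exited because --container-runtime-endpoint was not set, while the restarted kubelet above reports containerd 1.6.16 as its runtime; containerd earlier logged that it serves on /run/containerd/containerd.sock. A small, purely illustrative check (not part of the boot flow shown here) that prints the flag the validation error asks for, assuming that default socket path:

    #!/usr/bin/env python3
    # Illustrative only: verify the containerd CRI socket mentioned earlier in
    # this log exists, and print the kubelet flag the first failed start asked for.
    import os

    # Socket path taken from containerd's "serving..." lines above; adjust if
    # containerd is configured differently.
    CRI_SOCKET = "/run/containerd/containerd.sock"

    if os.path.exists(CRI_SOCKET):
        print(f"--container-runtime-endpoint=unix://{CRI_SOCKET}")
    else:
        print(f"containerd socket not found at {CRI_SOCKET}; "
              "kubelet flag validation would fail as in the first start above")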
Feb 12 21:54:52.311589 kubelet[1648]: W0212 21:54:52.311577 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.137" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:54:52.311625 kubelet[1648]: E0212 21:54:52.311596 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.137" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:54:52.311625 kubelet[1648]: W0212 21:54:52.311614 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:52.311625 kubelet[1648]: E0212 21:54:52.311620 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:52.313971 kubelet[1648]: E0212 21:54:52.313891 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c37647c92d4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 310303444, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 310303444, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:52.314814 kubelet[1648]: I0212 21:54:52.314791 1648 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 21:54:52.314868 kubelet[1648]: I0212 21:54:52.314830 1648 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 21:54:52.330134 kubelet[1648]: E0212 21:54:52.330112 1648 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.67.124.137" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:54:52.330224 kubelet[1648]: W0212 21:54:52.330149 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:54:52.330224 kubelet[1648]: E0212 21:54:52.330164 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:54:52.333675 kubelet[1648]: I0212 21:54:52.333663 1648 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 21:54:52.333753 kubelet[1648]: I0212 21:54:52.333745 1648 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 21:54:52.333829 kubelet[1648]: I0212 21:54:52.333822 1648 state_mem.go:36] "Initialized new in-memory state store" Feb 12 21:54:52.334952 kubelet[1648]: I0212 21:54:52.334941 1648 policy_none.go:49] "None policy: Start" Feb 12 21:54:52.335196 kubelet[1648]: E0212 21:54:52.335143 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:52.335650 kubelet[1648]: I0212 21:54:52.335624 1648 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 21:54:52.335712 kubelet[1648]: I0212 21:54:52.335705 1648 state_mem.go:35] "Initializing new in-memory state store" Feb 12 21:54:52.340085 kubelet[1648]: E0212 21:54:52.340027 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:52.346391 kubelet[1648]: I0212 21:54:52.346372 1648 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 21:54:52.346663 kubelet[1648]: I0212 21:54:52.346654 1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 21:54:52.347882 kubelet[1648]: E0212 21:54:52.347829 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:52.349427 kubelet[1648]: E0212 21:54:52.349357 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3766b43873", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 347504755, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 347504755, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:52.349645 kubelet[1648]: E0212 21:54:52.349636 1648 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.137\" not found" Feb 12 21:54:52.415987 kubelet[1648]: I0212 21:54:52.415972 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:54:52.417148 kubelet[1648]: E0212 21:54:52.417135 1648 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.137" Feb 12 21:54:52.417284 kubelet[1648]: E0212 21:54:52.417232 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 415913148, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765daa2b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 12 21:54:52.417882 kubelet[1648]: E0212 21:54:52.417848 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 415918631, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad285" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:52.418435 kubelet[1648]: E0212 21:54:52.418390 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 415920220, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad8b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:52.480878 kubelet[1648]: I0212 21:54:52.480864 1648 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 21:54:52.492102 kubelet[1648]: I0212 21:54:52.492088 1648 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 21:54:52.492207 kubelet[1648]: I0212 21:54:52.492200 1648 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 21:54:52.492313 kubelet[1648]: I0212 21:54:52.492306 1648 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 21:54:52.492383 kubelet[1648]: E0212 21:54:52.492378 1648 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 21:54:52.493244 kubelet[1648]: W0212 21:54:52.493229 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:54:52.493297 kubelet[1648]: E0212 21:54:52.493258 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:54:52.531353 kubelet[1648]: E0212 21:54:52.531336 1648 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.67.124.137" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:54:52.619076 kubelet[1648]: I0212 21:54:52.618245 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:54:52.620131 kubelet[1648]: E0212 21:54:52.620059 1648 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.137" Feb 12 21:54:52.620355 kubelet[1648]: E0212 21:54:52.620293 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 618217610, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765daa2b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:52.621100 kubelet[1648]: E0212 21:54:52.621056 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 618221596, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad285" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:52.706157 kubelet[1648]: E0212 21:54:52.706089 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 618223355, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad8b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:52.933294 kubelet[1648]: E0212 21:54:52.933195 1648 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.67.124.137" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:54:53.020979 kubelet[1648]: I0212 21:54:53.020959 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:54:53.021551 kubelet[1648]: E0212 21:54:53.021540 1648 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.137" Feb 12 21:54:53.021767 kubelet[1648]: E0212 21:54:53.021723 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 53, 20937276, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765daa2b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
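The lease-controller errors above retry after 200ms, then 400ms, then 800ms, doubling each time. The sketch below only mirrors that doubling pattern for illustration; it is not the kubelet's actual retry code, and the cap and attempt count are assumptions:

    # Minimal sketch of the doubling retry pattern visible in the lease errors
    # above. Not kubelet code; cap and attempts are arbitrary assumptions.
    import time

    def retry_with_backoff(op, initial=0.2, factor=2.0, cap=7.0, attempts=4):
        delay = initial
        for i in range(attempts):
            if op():
                return True
            print(f"attempt {i + 1} failed, retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * factor, cap)   # 0.2s -> 0.4s -> 0.8s -> ...
        return False

    # Example: an operation that never succeeds, like the forbidden lease
    # requests in the log above.
    retry_with_backoff(lambda: False)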
Feb 12 21:54:53.105831 kubelet[1648]: E0212 21:54:53.105765 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 53, 20940895, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad285" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:53.282216 kubelet[1648]: W0212 21:54:53.282193 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:53.282216 kubelet[1648]: E0212 21:54:53.282218 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:53.303435 kubelet[1648]: E0212 21:54:53.303413 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:53.306374 kubelet[1648]: E0212 21:54:53.306309 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 53, 20942357, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad8b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:53.349217 kubelet[1648]: W0212 21:54:53.349188 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:54:53.349217 kubelet[1648]: E0212 21:54:53.349217 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:54:53.564793 kubelet[1648]: W0212 21:54:53.564730 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:54:53.564793 kubelet[1648]: E0212 21:54:53.564752 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:54:53.612620 kubelet[1648]: W0212 21:54:53.612598 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.137" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:54:53.612760 kubelet[1648]: E0212 21:54:53.612750 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.137" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:54:53.734289 kubelet[1648]: E0212 21:54:53.734268 1648 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.67.124.137" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:54:53.822216 kubelet[1648]: I0212 21:54:53.822146 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:54:53.823195 kubelet[1648]: E0212 21:54:53.823151 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, 
FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 53, 822124772, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765daa2b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:53.823406 kubelet[1648]: E0212 21:54:53.823231 1648 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.137" Feb 12 21:54:53.823773 kubelet[1648]: E0212 21:54:53.823719 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 53, 822128009, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad285" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:53.906012 kubelet[1648]: E0212 21:54:53.905910 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 53, 822129416, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad8b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:54.301312 update_engine[1219]: I0212 21:54:54.301275 1219 update_attempter.cc:509] Updating boot flags... Feb 12 21:54:54.304024 kubelet[1648]: E0212 21:54:54.304009 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:55.084601 kubelet[1648]: W0212 21:54:55.084564 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:55.084601 kubelet[1648]: E0212 21:54:55.084584 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:55.304869 kubelet[1648]: E0212 21:54:55.304812 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:55.335675 kubelet[1648]: E0212 21:54:55.335579 1648 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.67.124.137" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:54:55.425012 kubelet[1648]: I0212 21:54:55.424710 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:54:55.425482 kubelet[1648]: E0212 21:54:55.425427 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 55, 424682357, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765daa2b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:55.425668 kubelet[1648]: E0212 21:54:55.425501 1648 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.137" Feb 12 21:54:55.426089 kubelet[1648]: E0212 21:54:55.426057 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 55, 424687433, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad285" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:55.426521 kubelet[1648]: E0212 21:54:55.426490 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 55, 424689012, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad8b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:56.139012 kubelet[1648]: W0212 21:54:56.138983 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:54:56.139012 kubelet[1648]: E0212 21:54:56.139014 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:54:56.274024 kubelet[1648]: W0212 21:54:56.274002 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:54:56.274024 kubelet[1648]: E0212 21:54:56.274023 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:54:56.305312 kubelet[1648]: E0212 21:54:56.305287 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:56.350561 kubelet[1648]: W0212 21:54:56.350535 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.124.137" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:54:56.350561 kubelet[1648]: E0212 21:54:56.350560 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.137" is forbidden: User "system:anonymous" cannot list resource 
"nodes" in API group "" at the cluster scope Feb 12 21:54:57.306428 kubelet[1648]: E0212 21:54:57.306402 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:58.307007 kubelet[1648]: E0212 21:54:58.306973 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:58.537178 kubelet[1648]: E0212 21:54:58.537155 1648 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.67.124.137" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:54:58.627265 kubelet[1648]: I0212 21:54:58.627188 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:54:58.628459 kubelet[1648]: E0212 21:54:58.628443 1648 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.124.137" Feb 12 21:54:58.628557 kubelet[1648]: E0212 21:54:58.628516 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765daa2b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.124.137 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333245106, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 58, 627160125, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765daa2b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:58.629054 kubelet[1648]: E0212 21:54:58.629028 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad285", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.124.137 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333257349, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 58, 627166399, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad285" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:54:58.629516 kubelet[1648]: E0212 21:54:58.629488 1648 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.137.17b33c3765dad8b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.124.137", UID:"10.67.124.137", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.124.137 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.124.137"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 52, 333258936, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 54, 58, 627167930, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.124.137.17b33c3765dad8b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:54:59.307908 kubelet[1648]: E0212 21:54:59.307884 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:54:59.707763 kubelet[1648]: W0212 21:54:59.707694 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:54:59.707763 kubelet[1648]: E0212 21:54:59.707721 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:55:00.308782 kubelet[1648]: E0212 21:55:00.308756 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:01.309194 kubelet[1648]: E0212 21:55:01.309147 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:02.098144 kubelet[1648]: W0212 21:55:02.098123 1648 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:55:02.098284 kubelet[1648]: E0212 21:55:02.098276 1648 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:55:02.298048 kubelet[1648]: I0212 21:55:02.298000 1648 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 21:55:02.310285 kubelet[1648]: E0212 21:55:02.310240 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:02.350456 kubelet[1648]: E0212 21:55:02.350399 1648 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.137\" not found" Feb 12 21:55:02.656545 kubelet[1648]: E0212 21:55:02.656473 1648 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.137" not found Feb 12 21:55:03.310595 kubelet[1648]: I0212 21:55:03.310560 1648 apiserver.go:52] "Watching apiserver" Feb 12 21:55:03.310843 kubelet[1648]: E0212 21:55:03.310573 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:03.515115 kubelet[1648]: I0212 21:55:03.515093 1648 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 21:55:03.562553 kubelet[1648]: I0212 21:55:03.562466 1648 reconciler.go:41] "Reconciler: start to sync state" Feb 12 21:55:04.311325 kubelet[1648]: E0212 21:55:04.311298 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:04.314786 kubelet[1648]: E0212 21:55:04.314774 1648 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.124.137" not 
found Feb 12 21:55:04.940593 kubelet[1648]: E0212 21:55:04.940559 1648 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.124.137\" not found" node="10.67.124.137" Feb 12 21:55:05.029555 kubelet[1648]: I0212 21:55:05.029526 1648 kubelet_node_status.go:70] "Attempting to register node" node="10.67.124.137" Feb 12 21:55:05.312589 kubelet[1648]: E0212 21:55:05.312562 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:05.315502 kubelet[1648]: I0212 21:55:05.315479 1648 kubelet_node_status.go:73] "Successfully registered node" node="10.67.124.137" Feb 12 21:55:05.345703 kubelet[1648]: I0212 21:55:05.345661 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:05.346603 kubelet[1648]: I0212 21:55:05.346578 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:05.370194 sudo[1413]: pam_unix(sudo:session): session closed for user root Feb 12 21:55:05.374576 sshd[1407]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:05.376216 systemd[1]: sshd@4-139.178.70.105:22-139.178.89.65:34732.service: Deactivated successfully. Feb 12 21:55:05.376865 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 21:55:05.377625 systemd-logind[1217]: Session 7 logged out. Waiting for processes to exit. Feb 12 21:55:05.379320 systemd-logind[1217]: Removed session 7. Feb 12 21:55:05.424087 kubelet[1648]: I0212 21:55:05.424062 1648 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 21:55:05.424473 env[1227]: time="2024-02-12T21:55:05.424406214Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 21:55:05.424813 kubelet[1648]: I0212 21:55:05.424794 1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 21:55:05.474568 kubelet[1648]: I0212 21:55:05.474533 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-config-path\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474568 kubelet[1648]: I0212 21:55:05.474568 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kltr\" (UniqueName: \"kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-kube-api-access-6kltr\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474689 kubelet[1648]: I0212 21:55:05.474583 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4s2\" (UniqueName: \"kubernetes.io/projected/2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9-kube-api-access-6f4s2\") pod \"kube-proxy-c26x6\" (UID: \"2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9\") " pod="kube-system/kube-proxy-c26x6" Feb 12 21:55:05.474689 kubelet[1648]: I0212 21:55:05.474598 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cni-path\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474689 kubelet[1648]: I0212 21:55:05.474610 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-etc-cni-netd\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474689 kubelet[1648]: I0212 21:55:05.474621 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-xtables-lock\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474689 kubelet[1648]: I0212 21:55:05.474631 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-run\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474689 kubelet[1648]: I0212 21:55:05.474644 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hostproc\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474813 kubelet[1648]: I0212 21:55:05.474657 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-cgroup\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474813 
kubelet[1648]: I0212 21:55:05.474668 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-net\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474813 kubelet[1648]: I0212 21:55:05.474679 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9-lib-modules\") pod \"kube-proxy-c26x6\" (UID: \"2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9\") " pod="kube-system/kube-proxy-c26x6" Feb 12 21:55:05.474813 kubelet[1648]: I0212 21:55:05.474708 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-bpf-maps\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474813 kubelet[1648]: I0212 21:55:05.474723 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-lib-modules\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474813 kubelet[1648]: I0212 21:55:05.474735 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-kernel\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474932 kubelet[1648]: I0212 21:55:05.474746 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9-kube-proxy\") pod \"kube-proxy-c26x6\" (UID: \"2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9\") " pod="kube-system/kube-proxy-c26x6" Feb 12 21:55:05.474932 kubelet[1648]: I0212 21:55:05.474758 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9-xtables-lock\") pod \"kube-proxy-c26x6\" (UID: \"2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9\") " pod="kube-system/kube-proxy-c26x6" Feb 12 21:55:05.474932 kubelet[1648]: I0212 21:55:05.474770 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8565fa0e-4be7-4e21-b4bb-da49b9125bef-clustermesh-secrets\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:05.474932 kubelet[1648]: I0212 21:55:05.474782 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hubble-tls\") pod \"cilium-tczf6\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " pod="kube-system/cilium-tczf6" Feb 12 21:55:06.313298 kubelet[1648]: E0212 21:55:06.313268 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
21:55:06.512831 kubelet[1648]: I0212 21:55:06.512816 1648 request.go:690] Waited for 1.165912051s due to client-side throttling, not priority and fairness, request: GET:https://139.178.70.99:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dhubble-server-certs&limit=500&resourceVersion=0 Feb 12 21:55:06.576520 kubelet[1648]: E0212 21:55:06.576453 1648 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 21:55:06.576916 kubelet[1648]: E0212 21:55:06.576767 1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-config-path podName:8565fa0e-4be7-4e21-b4bb-da49b9125bef nodeName:}" failed. No retries permitted until 2024-02-12 21:55:07.076736232 +0000 UTC m=+15.215420597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-config-path") pod "cilium-tczf6" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef") : failed to sync configmap cache: timed out waiting for the condition Feb 12 21:55:07.153039 env[1227]: time="2024-02-12T21:55:07.152615353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c26x6,Uid:2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:07.314089 kubelet[1648]: E0212 21:55:07.314063 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:07.451889 env[1227]: time="2024-02-12T21:55:07.451826461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tczf6,Uid:8565fa0e-4be7-4e21-b4bb-da49b9125bef,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:07.727498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814905130.mount: Deactivated successfully. 
Feb 12 21:55:07.730219 env[1227]: time="2024-02-12T21:55:07.730187555Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.730804 env[1227]: time="2024-02-12T21:55:07.730790597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.731550 env[1227]: time="2024-02-12T21:55:07.731532471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.732818 env[1227]: time="2024-02-12T21:55:07.732802427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.733973 env[1227]: time="2024-02-12T21:55:07.733952816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.734349 env[1227]: time="2024-02-12T21:55:07.734334409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.735489 env[1227]: time="2024-02-12T21:55:07.735477263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.737584 env[1227]: time="2024-02-12T21:55:07.737566214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:07.786614 env[1227]: time="2024-02-12T21:55:07.786579829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:07.786714 env[1227]: time="2024-02-12T21:55:07.786700186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:07.786846 env[1227]: time="2024-02-12T21:55:07.786828422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:07.787078 env[1227]: time="2024-02-12T21:55:07.787063074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd0aed9b2cf279eaa67bb2436302863928b85b4dfe160d371c6253061a359f26 pid=1758 runtime=io.containerd.runc.v2 Feb 12 21:55:07.790511 env[1227]: time="2024-02-12T21:55:07.790473086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:07.790615 env[1227]: time="2024-02-12T21:55:07.790600573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:07.790695 env[1227]: time="2024-02-12T21:55:07.790682143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:07.790835 env[1227]: time="2024-02-12T21:55:07.790813558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039 pid=1775 runtime=io.containerd.runc.v2 Feb 12 21:55:07.820642 env[1227]: time="2024-02-12T21:55:07.820462378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c26x6,Uid:2b2f8a97-3a5f-4f85-8ac2-a46c9277aff9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0aed9b2cf279eaa67bb2436302863928b85b4dfe160d371c6253061a359f26\"" Feb 12 21:55:07.822634 env[1227]: time="2024-02-12T21:55:07.822607379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 21:55:07.828206 env[1227]: time="2024-02-12T21:55:07.828177944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tczf6,Uid:8565fa0e-4be7-4e21-b4bb-da49b9125bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\"" Feb 12 21:55:08.314571 kubelet[1648]: E0212 21:55:08.314546 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:08.740485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318337245.mount: Deactivated successfully. Feb 12 21:55:09.102446 env[1227]: time="2024-02-12T21:55:09.102362480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:09.103083 env[1227]: time="2024-02-12T21:55:09.103067541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:09.103785 env[1227]: time="2024-02-12T21:55:09.103771249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:09.104492 env[1227]: time="2024-02-12T21:55:09.104480501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:09.104808 env[1227]: time="2024-02-12T21:55:09.104793616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 21:55:09.105507 env[1227]: time="2024-02-12T21:55:09.105488553Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 21:55:09.106686 env[1227]: time="2024-02-12T21:55:09.106664469Z" level=info msg="CreateContainer within sandbox \"bd0aed9b2cf279eaa67bb2436302863928b85b4dfe160d371c6253061a359f26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 21:55:09.112219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645206470.mount: Deactivated successfully. 
Feb 12 21:55:09.147341 env[1227]: time="2024-02-12T21:55:09.147310582Z" level=info msg="CreateContainer within sandbox \"bd0aed9b2cf279eaa67bb2436302863928b85b4dfe160d371c6253061a359f26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7f3fd3c8c0dc0cf32ad5884e243895dbb3b415eb1aaff90506b6b9e3ebee298\"" Feb 12 21:55:09.147884 env[1227]: time="2024-02-12T21:55:09.147866985Z" level=info msg="StartContainer for \"f7f3fd3c8c0dc0cf32ad5884e243895dbb3b415eb1aaff90506b6b9e3ebee298\"" Feb 12 21:55:09.209159 env[1227]: time="2024-02-12T21:55:09.209124684Z" level=info msg="StartContainer for \"f7f3fd3c8c0dc0cf32ad5884e243895dbb3b415eb1aaff90506b6b9e3ebee298\" returns successfully" Feb 12 21:55:09.314942 kubelet[1648]: E0212 21:55:09.314892 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:09.520495 kubelet[1648]: I0212 21:55:09.520465 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c26x6" podStartSLOduration=-9.22337203233436e+09 pod.CreationTimestamp="2024-02-12 21:55:05 +0000 UTC" firstStartedPulling="2024-02-12 21:55:07.821873314 +0000 UTC m=+15.960557678" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:09.520091759 +0000 UTC m=+17.658776132" watchObservedRunningTime="2024-02-12 21:55:09.520415794 +0000 UTC m=+17.659100159" Feb 12 21:55:10.315564 kubelet[1648]: E0212 21:55:10.315535 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:11.316219 kubelet[1648]: E0212 21:55:11.316195 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:12.303518 kubelet[1648]: E0212 21:55:12.303487 1648 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:12.316829 kubelet[1648]: E0212 21:55:12.316800 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:13.317608 kubelet[1648]: E0212 21:55:13.317579 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:13.668415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105845155.mount: Deactivated successfully. 
Feb 12 21:55:14.317735 kubelet[1648]: E0212 21:55:14.317691 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:15.318531 kubelet[1648]: E0212 21:55:15.318494 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:15.814078 env[1227]: time="2024-02-12T21:55:15.813974449Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:15.836999 env[1227]: time="2024-02-12T21:55:15.836969659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:15.846889 env[1227]: time="2024-02-12T21:55:15.846873416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:15.847223 env[1227]: time="2024-02-12T21:55:15.847208719Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 21:55:15.848646 env[1227]: time="2024-02-12T21:55:15.848625583Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:55:15.899456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204459642.mount: Deactivated successfully. Feb 12 21:55:15.903375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946485639.mount: Deactivated successfully. 
Feb 12 21:55:15.944000 env[1227]: time="2024-02-12T21:55:15.943950078Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\"" Feb 12 21:55:15.944439 env[1227]: time="2024-02-12T21:55:15.944426654Z" level=info msg="StartContainer for \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\"" Feb 12 21:55:15.999813 env[1227]: time="2024-02-12T21:55:15.999783338Z" level=info msg="StartContainer for \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\" returns successfully" Feb 12 21:55:16.318989 kubelet[1648]: E0212 21:55:16.318934 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:16.738014 env[1227]: time="2024-02-12T21:55:16.737974313Z" level=info msg="shim disconnected" id=09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f Feb 12 21:55:16.738178 env[1227]: time="2024-02-12T21:55:16.738162870Z" level=warning msg="cleaning up after shim disconnected" id=09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f namespace=k8s.io Feb 12 21:55:16.738268 env[1227]: time="2024-02-12T21:55:16.738239666Z" level=info msg="cleaning up dead shim" Feb 12 21:55:16.744243 env[1227]: time="2024-02-12T21:55:16.744219716Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2031 runtime=io.containerd.runc.v2\n" Feb 12 21:55:16.897480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f-rootfs.mount: Deactivated successfully. Feb 12 21:55:17.319055 kubelet[1648]: E0212 21:55:17.319021 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:17.527123 env[1227]: time="2024-02-12T21:55:17.527086565Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:55:17.533647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161054559.mount: Deactivated successfully. Feb 12 21:55:17.538134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036623518.mount: Deactivated successfully. Feb 12 21:55:17.543436 env[1227]: time="2024-02-12T21:55:17.543409760Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\"" Feb 12 21:55:17.543949 env[1227]: time="2024-02-12T21:55:17.543929402Z" level=info msg="StartContainer for \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\"" Feb 12 21:55:17.574129 env[1227]: time="2024-02-12T21:55:17.573874107Z" level=info msg="StartContainer for \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\" returns successfully" Feb 12 21:55:17.580698 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 21:55:17.580858 systemd[1]: Stopped systemd-sysctl.service. Feb 12 21:55:17.581155 systemd[1]: Stopping systemd-sysctl.service... Feb 12 21:55:17.582540 systemd[1]: Starting systemd-sysctl.service... 
Feb 12 21:55:17.588658 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:55:17.600617 env[1227]: time="2024-02-12T21:55:17.600556263Z" level=info msg="shim disconnected" id=77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e Feb 12 21:55:17.600765 env[1227]: time="2024-02-12T21:55:17.600753713Z" level=warning msg="cleaning up after shim disconnected" id=77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e namespace=k8s.io Feb 12 21:55:17.600826 env[1227]: time="2024-02-12T21:55:17.600803839Z" level=info msg="cleaning up dead shim" Feb 12 21:55:17.608588 env[1227]: time="2024-02-12T21:55:17.608560007Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2094 runtime=io.containerd.runc.v2\n" Feb 12 21:55:18.319306 kubelet[1648]: E0212 21:55:18.319286 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:18.528284 env[1227]: time="2024-02-12T21:55:18.528244828Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:55:18.534387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543068998.mount: Deactivated successfully. Feb 12 21:55:18.537989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119279353.mount: Deactivated successfully. Feb 12 21:55:18.540021 env[1227]: time="2024-02-12T21:55:18.540002325Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\"" Feb 12 21:55:18.540445 env[1227]: time="2024-02-12T21:55:18.540432605Z" level=info msg="StartContainer for \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\"" Feb 12 21:55:18.568710 env[1227]: time="2024-02-12T21:55:18.568684984Z" level=info msg="StartContainer for \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\" returns successfully" Feb 12 21:55:18.581630 env[1227]: time="2024-02-12T21:55:18.581429634Z" level=info msg="shim disconnected" id=b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b Feb 12 21:55:18.581630 env[1227]: time="2024-02-12T21:55:18.581459318Z" level=warning msg="cleaning up after shim disconnected" id=b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b namespace=k8s.io Feb 12 21:55:18.581630 env[1227]: time="2024-02-12T21:55:18.581466873Z" level=info msg="cleaning up dead shim" Feb 12 21:55:18.585960 env[1227]: time="2024-02-12T21:55:18.585939046Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2152 runtime=io.containerd.runc.v2\n" Feb 12 21:55:19.320522 kubelet[1648]: E0212 21:55:19.320507 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:19.530042 env[1227]: time="2024-02-12T21:55:19.530014483Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:55:19.535670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526709056.mount: Deactivated successfully. 
Feb 12 21:55:19.539216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967515747.mount: Deactivated successfully. Feb 12 21:55:19.541335 env[1227]: time="2024-02-12T21:55:19.541312449Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\"" Feb 12 21:55:19.541697 env[1227]: time="2024-02-12T21:55:19.541679969Z" level=info msg="StartContainer for \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\"" Feb 12 21:55:19.567233 env[1227]: time="2024-02-12T21:55:19.567210614Z" level=info msg="StartContainer for \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\" returns successfully" Feb 12 21:55:19.585118 env[1227]: time="2024-02-12T21:55:19.584938266Z" level=info msg="shim disconnected" id=97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f Feb 12 21:55:19.585273 env[1227]: time="2024-02-12T21:55:19.585256014Z" level=warning msg="cleaning up after shim disconnected" id=97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f namespace=k8s.io Feb 12 21:55:19.585330 env[1227]: time="2024-02-12T21:55:19.585320841Z" level=info msg="cleaning up dead shim" Feb 12 21:55:19.589701 env[1227]: time="2024-02-12T21:55:19.589681155Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2209 runtime=io.containerd.runc.v2\n" Feb 12 21:55:20.321590 kubelet[1648]: E0212 21:55:20.321558 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:20.533049 env[1227]: time="2024-02-12T21:55:20.533008057Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:55:20.563452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295059732.mount: Deactivated successfully. Feb 12 21:55:20.566229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977125628.mount: Deactivated successfully. Feb 12 21:55:20.568099 env[1227]: time="2024-02-12T21:55:20.568077108Z" level=info msg="CreateContainer within sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\"" Feb 12 21:55:20.568652 env[1227]: time="2024-02-12T21:55:20.568629303Z" level=info msg="StartContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\"" Feb 12 21:55:20.612272 env[1227]: time="2024-02-12T21:55:20.612065326Z" level=info msg="StartContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" returns successfully" Feb 12 21:55:20.664262 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 12 21:55:20.729748 kubelet[1648]: I0212 21:55:20.729725 1648 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 21:55:20.878268 kernel: Initializing XFRM netlink socket Feb 12 21:55:20.880275 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 21:55:21.322071 kubelet[1648]: E0212 21:55:21.322044 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:21.542770 kubelet[1648]: I0212 21:55:21.542739 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tczf6" podStartSLOduration=-9.223372020312063e+09 pod.CreationTimestamp="2024-02-12 21:55:05 +0000 UTC" firstStartedPulling="2024-02-12 21:55:07.829051576 +0000 UTC m=+15.967735945" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:21.54245175 +0000 UTC m=+29.681136122" watchObservedRunningTime="2024-02-12 21:55:21.542713339 +0000 UTC m=+29.681397704" Feb 12 21:55:21.988417 kubelet[1648]: I0212 21:55:21.987231 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:22.075468 kubelet[1648]: I0212 21:55:22.075441 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrm2\" (UniqueName: \"kubernetes.io/projected/e58d2050-089a-4b39-a259-84e47d620777-kube-api-access-6zrm2\") pod \"nginx-deployment-8ffc5cf85-h6w8h\" (UID: \"e58d2050-089a-4b39-a259-84e47d620777\") " pod="default/nginx-deployment-8ffc5cf85-h6w8h" Feb 12 21:55:22.290386 env[1227]: time="2024-02-12T21:55:22.290228314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-h6w8h,Uid:e58d2050-089a-4b39-a259-84e47d620777,Namespace:default,Attempt:0,}" Feb 12 21:55:22.322330 kubelet[1648]: E0212 21:55:22.322295 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:22.489103 systemd-networkd[1110]: cilium_host: Link UP Feb 12 21:55:22.494164 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 21:55:22.494217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 21:55:22.494337 systemd-networkd[1110]: cilium_net: Link UP Feb 12 21:55:22.494458 systemd-networkd[1110]: cilium_net: Gained carrier Feb 12 21:55:22.494544 systemd-networkd[1110]: cilium_host: Gained carrier Feb 12 21:55:22.575693 systemd-networkd[1110]: cilium_vxlan: Link UP Feb 12 21:55:22.575697 systemd-networkd[1110]: cilium_vxlan: Gained carrier Feb 12 21:55:22.717267 kernel: NET: Registered PF_ALG protocol family Feb 12 21:55:23.031373 systemd-networkd[1110]: cilium_net: Gained IPv6LL Feb 12 21:55:23.109532 systemd-networkd[1110]: lxc_health: Link UP Feb 12 21:55:23.117698 systemd-networkd[1110]: lxc_health: Gained carrier Feb 12 21:55:23.118265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:55:23.316813 systemd-networkd[1110]: lxc676e324a0e37: Link UP Feb 12 21:55:23.323364 kubelet[1648]: E0212 21:55:23.323338 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:23.324287 kernel: eth0: renamed from tmpb6609 Feb 12 21:55:23.334779 systemd-networkd[1110]: lxc676e324a0e37: Gained carrier Feb 12 21:55:23.335301 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc676e324a0e37: link becomes ready Feb 12 21:55:23.353329 systemd-networkd[1110]: cilium_host: Gained IPv6LL Feb 12 21:55:23.799376 systemd-networkd[1110]: cilium_vxlan: Gained IPv6LL Feb 12 21:55:24.324079 kubelet[1648]: E0212 21:55:24.324053 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:24.537587 kubelet[1648]: I0212 
21:55:24.537572 1648 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 21:55:24.759388 systemd-networkd[1110]: lxc_health: Gained IPv6LL Feb 12 21:55:25.207337 systemd-networkd[1110]: lxc676e324a0e37: Gained IPv6LL Feb 12 21:55:25.325184 kubelet[1648]: E0212 21:55:25.325160 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:25.829613 env[1227]: time="2024-02-12T21:55:25.829578004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:25.829868 env[1227]: time="2024-02-12T21:55:25.829853545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:25.829934 env[1227]: time="2024-02-12T21:55:25.829920753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:25.830120 env[1227]: time="2024-02-12T21:55:25.830100603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b66091834344d283a33ed0bfcea6e18bfc5d90d58d6b71981ade386bf2e91f51 pid=2737 runtime=io.containerd.runc.v2 Feb 12 21:55:25.847222 systemd[1]: run-containerd-runc-k8s.io-b66091834344d283a33ed0bfcea6e18bfc5d90d58d6b71981ade386bf2e91f51-runc.gL1I8W.mount: Deactivated successfully. Feb 12 21:55:25.857160 systemd-resolved[1168]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 21:55:25.880610 env[1227]: time="2024-02-12T21:55:25.880583861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-h6w8h,Uid:e58d2050-089a-4b39-a259-84e47d620777,Namespace:default,Attempt:0,} returns sandbox id \"b66091834344d283a33ed0bfcea6e18bfc5d90d58d6b71981ade386bf2e91f51\"" Feb 12 21:55:25.881870 env[1227]: time="2024-02-12T21:55:25.881811448Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 21:55:26.326018 kubelet[1648]: E0212 21:55:26.325982 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:27.327107 kubelet[1648]: E0212 21:55:27.327082 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:28.327602 kubelet[1648]: E0212 21:55:28.327569 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:29.032342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847434406.mount: Deactivated successfully. 
Feb 12 21:55:29.327934 kubelet[1648]: E0212 21:55:29.327741 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:29.643217 env[1227]: time="2024-02-12T21:55:29.643042925Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:29.644344 env[1227]: time="2024-02-12T21:55:29.644332087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:29.645526 env[1227]: time="2024-02-12T21:55:29.645513770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:29.646593 env[1227]: time="2024-02-12T21:55:29.646580891Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:29.646974 env[1227]: time="2024-02-12T21:55:29.646960846Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 21:55:29.648348 env[1227]: time="2024-02-12T21:55:29.648333105Z" level=info msg="CreateContainer within sandbox \"b66091834344d283a33ed0bfcea6e18bfc5d90d58d6b71981ade386bf2e91f51\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 21:55:29.653444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435434184.mount: Deactivated successfully. Feb 12 21:55:29.656478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927065987.mount: Deactivated successfully. 
Feb 12 21:55:29.666456 env[1227]: time="2024-02-12T21:55:29.666428400Z" level=info msg="CreateContainer within sandbox \"b66091834344d283a33ed0bfcea6e18bfc5d90d58d6b71981ade386bf2e91f51\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5274c339e870689c289746e508705ed0ca8aeb700e66bf960c33b7e48693e58a\"" Feb 12 21:55:29.667012 env[1227]: time="2024-02-12T21:55:29.666990754Z" level=info msg="StartContainer for \"5274c339e870689c289746e508705ed0ca8aeb700e66bf960c33b7e48693e58a\"" Feb 12 21:55:29.697452 env[1227]: time="2024-02-12T21:55:29.697400841Z" level=info msg="StartContainer for \"5274c339e870689c289746e508705ed0ca8aeb700e66bf960c33b7e48693e58a\" returns successfully" Feb 12 21:55:30.328155 kubelet[1648]: E0212 21:55:30.328129 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:30.550612 kubelet[1648]: I0212 21:55:30.550524 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-h6w8h" podStartSLOduration=-9.223372027304274e+09 pod.CreationTimestamp="2024-02-12 21:55:21 +0000 UTC" firstStartedPulling="2024-02-12 21:55:25.881461572 +0000 UTC m=+34.020145935" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:30.550317771 +0000 UTC m=+38.689002155" watchObservedRunningTime="2024-02-12 21:55:30.550501741 +0000 UTC m=+38.689186113" Feb 12 21:55:31.328862 kubelet[1648]: E0212 21:55:31.328833 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:32.303118 kubelet[1648]: E0212 21:55:32.303097 1648 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:32.329434 kubelet[1648]: E0212 21:55:32.329409 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:33.330775 kubelet[1648]: E0212 21:55:33.330749 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:33.996791 kubelet[1648]: I0212 21:55:33.996768 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:34.046644 kubelet[1648]: I0212 21:55:34.046623 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3357cdbd-0309-4a51-9ff5-845f71ff9d17-data\") pod \"nfs-server-provisioner-0\" (UID: \"3357cdbd-0309-4a51-9ff5-845f71ff9d17\") " pod="default/nfs-server-provisioner-0" Feb 12 21:55:34.046797 kubelet[1648]: I0212 21:55:34.046788 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g4j5\" (UniqueName: \"kubernetes.io/projected/3357cdbd-0309-4a51-9ff5-845f71ff9d17-kube-api-access-6g4j5\") pod \"nfs-server-provisioner-0\" (UID: \"3357cdbd-0309-4a51-9ff5-845f71ff9d17\") " pod="default/nfs-server-provisioner-0" Feb 12 21:55:34.300062 env[1227]: time="2024-02-12T21:55:34.299907583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3357cdbd-0309-4a51-9ff5-845f71ff9d17,Namespace:default,Attempt:0,}" Feb 12 21:55:34.333330 kubelet[1648]: E0212 21:55:34.333293 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:34.342305 kernel: eth0: renamed from tmp6d694 Feb 12 
21:55:34.347629 systemd-networkd[1110]: lxcd5c9eba89cbb: Link UP Feb 12 21:55:34.350210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:55:34.350269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd5c9eba89cbb: link becomes ready Feb 12 21:55:34.350187 systemd-networkd[1110]: lxcd5c9eba89cbb: Gained carrier Feb 12 21:55:34.518057 env[1227]: time="2024-02-12T21:55:34.517847525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:34.518057 env[1227]: time="2024-02-12T21:55:34.517955816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:34.518057 env[1227]: time="2024-02-12T21:55:34.517963346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:34.518194 env[1227]: time="2024-02-12T21:55:34.518068244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d6945522d001423bbb0b55ed126398cf6a400e5e225b608318a0da0e15d3041 pid=2919 runtime=io.containerd.runc.v2 Feb 12 21:55:34.538380 systemd-resolved[1168]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 21:55:34.559131 env[1227]: time="2024-02-12T21:55:34.558918355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3357cdbd-0309-4a51-9ff5-845f71ff9d17,Namespace:default,Attempt:0,} returns sandbox id \"6d6945522d001423bbb0b55ed126398cf6a400e5e225b608318a0da0e15d3041\"" Feb 12 21:55:34.559951 env[1227]: time="2024-02-12T21:55:34.559931388Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 21:55:34.669595 kubelet[1648]: I0212 21:55:34.669570 1648 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 21:55:35.153464 systemd[1]: run-containerd-runc-k8s.io-6d6945522d001423bbb0b55ed126398cf6a400e5e225b608318a0da0e15d3041-runc.80Q1hL.mount: Deactivated successfully. Feb 12 21:55:35.334115 kubelet[1648]: E0212 21:55:35.334092 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:36.023416 systemd-networkd[1110]: lxcd5c9eba89cbb: Gained IPv6LL Feb 12 21:55:36.334290 kubelet[1648]: E0212 21:55:36.334149 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:36.820961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911614163.mount: Deactivated successfully. 
Feb 12 21:55:37.334764 kubelet[1648]: E0212 21:55:37.334739 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:38.335080 kubelet[1648]: E0212 21:55:38.335044 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:38.801037 env[1227]: time="2024-02-12T21:55:38.800991294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:38.801955 env[1227]: time="2024-02-12T21:55:38.801943437Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:38.802916 env[1227]: time="2024-02-12T21:55:38.802905868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:38.803911 env[1227]: time="2024-02-12T21:55:38.803899600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:38.804369 env[1227]: time="2024-02-12T21:55:38.804354655Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 21:55:38.805449 env[1227]: time="2024-02-12T21:55:38.805430573Z" level=info msg="CreateContainer within sandbox \"6d6945522d001423bbb0b55ed126398cf6a400e5e225b608318a0da0e15d3041\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 21:55:38.810673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913842215.mount: Deactivated successfully. Feb 12 21:55:38.813575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557599407.mount: Deactivated successfully. 
Feb 12 21:55:38.815440 env[1227]: time="2024-02-12T21:55:38.815419318Z" level=info msg="CreateContainer within sandbox \"6d6945522d001423bbb0b55ed126398cf6a400e5e225b608318a0da0e15d3041\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6e20d56592b09a468fad51ae3720f51fe2a088c86a225440939cd03a32fe32e6\"" Feb 12 21:55:38.815786 env[1227]: time="2024-02-12T21:55:38.815771396Z" level=info msg="StartContainer for \"6e20d56592b09a468fad51ae3720f51fe2a088c86a225440939cd03a32fe32e6\"" Feb 12 21:55:38.857266 env[1227]: time="2024-02-12T21:55:38.857172185Z" level=info msg="StartContainer for \"6e20d56592b09a468fad51ae3720f51fe2a088c86a225440939cd03a32fe32e6\" returns successfully" Feb 12 21:55:39.335647 kubelet[1648]: E0212 21:55:39.335611 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:39.564557 kubelet[1648]: I0212 21:55:39.564527 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372030290274e+09 pod.CreationTimestamp="2024-02-12 21:55:33 +0000 UTC" firstStartedPulling="2024-02-12 21:55:34.559810922 +0000 UTC m=+42.698495285" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:39.564197089 +0000 UTC m=+47.702881462" watchObservedRunningTime="2024-02-12 21:55:39.564501981 +0000 UTC m=+47.703186354" Feb 12 21:55:40.336277 kubelet[1648]: E0212 21:55:40.336241 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:41.336691 kubelet[1648]: E0212 21:55:41.336661 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:42.337491 kubelet[1648]: E0212 21:55:42.337457 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:43.337666 kubelet[1648]: E0212 21:55:43.337629 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:44.338110 kubelet[1648]: E0212 21:55:44.338077 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:45.338676 kubelet[1648]: E0212 21:55:45.338644 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:46.339479 kubelet[1648]: E0212 21:55:46.339440 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:47.340485 kubelet[1648]: E0212 21:55:47.340444 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:48.341187 kubelet[1648]: E0212 21:55:48.341138 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:48.836705 kubelet[1648]: I0212 21:55:48.836673 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:48.919294 kubelet[1648]: I0212 21:55:48.919271 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgd4\" (UniqueName: \"kubernetes.io/projected/66d1440a-91c4-4f91-b48e-10c1b8b6b64c-kube-api-access-nfgd4\") pod \"test-pod-1\" (UID: 
\"66d1440a-91c4-4f91-b48e-10c1b8b6b64c\") " pod="default/test-pod-1" Feb 12 21:55:48.919450 kubelet[1648]: I0212 21:55:48.919443 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d968567-c56b-472f-96d4-13390e98035d\" (UniqueName: \"kubernetes.io/nfs/66d1440a-91c4-4f91-b48e-10c1b8b6b64c-pvc-8d968567-c56b-472f-96d4-13390e98035d\") pod \"test-pod-1\" (UID: \"66d1440a-91c4-4f91-b48e-10c1b8b6b64c\") " pod="default/test-pod-1" Feb 12 21:55:49.100272 kernel: FS-Cache: Loaded Feb 12 21:55:49.128873 kernel: RPC: Registered named UNIX socket transport module. Feb 12 21:55:49.128955 kernel: RPC: Registered udp transport module. Feb 12 21:55:49.128972 kernel: RPC: Registered tcp transport module. Feb 12 21:55:49.128992 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 21:55:49.158272 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 21:55:49.289608 kernel: NFS: Registering the id_resolver key type Feb 12 21:55:49.289687 kernel: Key type id_resolver registered Feb 12 21:55:49.289704 kernel: Key type id_legacy registered Feb 12 21:55:49.308909 nfsidmap[3062]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 21:55:49.310486 nfsidmap[3063]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 21:55:49.341718 kubelet[1648]: E0212 21:55:49.341694 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:49.440860 env[1227]: time="2024-02-12T21:55:49.440765942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:66d1440a-91c4-4f91-b48e-10c1b8b6b64c,Namespace:default,Attempt:0,}" Feb 12 21:55:49.485682 systemd-networkd[1110]: lxc780f9325ecc4: Link UP Feb 12 21:55:49.494291 kernel: eth0: renamed from tmp5fab1 Feb 12 21:55:49.500590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:55:49.500654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc780f9325ecc4: link becomes ready Feb 12 21:55:49.500872 systemd-networkd[1110]: lxc780f9325ecc4: Gained carrier Feb 12 21:55:49.701680 env[1227]: time="2024-02-12T21:55:49.701501432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:49.701680 env[1227]: time="2024-02-12T21:55:49.701531850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:49.701915 env[1227]: time="2024-02-12T21:55:49.701539176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:49.702055 env[1227]: time="2024-02-12T21:55:49.702030989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fab17f395484b2bc57127acb1ba138ef1c6aba4034fa3c345e6611a3d462299 pid=3101 runtime=io.containerd.runc.v2 Feb 12 21:55:49.716614 systemd-resolved[1168]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 21:55:49.735946 env[1227]: time="2024-02-12T21:55:49.735918121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:66d1440a-91c4-4f91-b48e-10c1b8b6b64c,Namespace:default,Attempt:0,} returns sandbox id \"5fab17f395484b2bc57127acb1ba138ef1c6aba4034fa3c345e6611a3d462299\"" Feb 12 21:55:49.736760 env[1227]: time="2024-02-12T21:55:49.736658934Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 21:55:50.212008 env[1227]: time="2024-02-12T21:55:50.211968977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:50.213307 env[1227]: time="2024-02-12T21:55:50.213283943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:50.214863 env[1227]: time="2024-02-12T21:55:50.214848589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:50.215981 env[1227]: time="2024-02-12T21:55:50.215964482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:50.216330 env[1227]: time="2024-02-12T21:55:50.216312832Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 21:55:50.217705 env[1227]: time="2024-02-12T21:55:50.217681885Z" level=info msg="CreateContainer within sandbox \"5fab17f395484b2bc57127acb1ba138ef1c6aba4034fa3c345e6611a3d462299\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 21:55:50.222754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15170051.mount: Deactivated successfully. Feb 12 21:55:50.225409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059553978.mount: Deactivated successfully. 
Feb 12 21:55:50.227118 env[1227]: time="2024-02-12T21:55:50.227094031Z" level=info msg="CreateContainer within sandbox \"5fab17f395484b2bc57127acb1ba138ef1c6aba4034fa3c345e6611a3d462299\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4bcba456b873b9ee752ea6f75a8fee33e455fc726fdb2cac05c80c02d79490c6\"" Feb 12 21:55:50.227482 env[1227]: time="2024-02-12T21:55:50.227429735Z" level=info msg="StartContainer for \"4bcba456b873b9ee752ea6f75a8fee33e455fc726fdb2cac05c80c02d79490c6\"" Feb 12 21:55:50.261678 env[1227]: time="2024-02-12T21:55:50.261644939Z" level=info msg="StartContainer for \"4bcba456b873b9ee752ea6f75a8fee33e455fc726fdb2cac05c80c02d79490c6\" returns successfully" Feb 12 21:55:50.343002 kubelet[1648]: E0212 21:55:50.342967 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:50.579777 kubelet[1648]: I0212 21:55:50.579298 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372020275505e+09 pod.CreationTimestamp="2024-02-12 21:55:34 +0000 UTC" firstStartedPulling="2024-02-12 21:55:49.73653102 +0000 UTC m=+57.875215380" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:50.579242011 +0000 UTC m=+58.717926390" watchObservedRunningTime="2024-02-12 21:55:50.579270915 +0000 UTC m=+58.717955288" Feb 12 21:55:51.343158 kubelet[1648]: E0212 21:55:51.343136 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:51.447496 systemd-networkd[1110]: lxc780f9325ecc4: Gained IPv6LL Feb 12 21:55:52.303947 kubelet[1648]: E0212 21:55:52.303924 1648 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:52.343532 kubelet[1648]: E0212 21:55:52.343477 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:53.344547 kubelet[1648]: E0212 21:55:53.344520 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:54.345850 kubelet[1648]: E0212 21:55:54.345824 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:55.347160 kubelet[1648]: E0212 21:55:55.347117 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:56.347772 kubelet[1648]: E0212 21:55:56.347737 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:56.918885 systemd[1]: run-containerd-runc-k8s.io-e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea-runc.dDhlYd.mount: Deactivated successfully. 
Feb 12 21:55:56.958013 env[1227]: time="2024-02-12T21:55:56.957966435Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 21:55:56.967088 env[1227]: time="2024-02-12T21:55:56.967070297Z" level=info msg="StopContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" with timeout 1 (s)" Feb 12 21:55:56.967302 env[1227]: time="2024-02-12T21:55:56.967290223Z" level=info msg="Stop container \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" with signal terminated" Feb 12 21:55:56.970943 systemd-networkd[1110]: lxc_health: Link DOWN Feb 12 21:55:56.970949 systemd-networkd[1110]: lxc_health: Lost carrier Feb 12 21:55:57.004177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea-rootfs.mount: Deactivated successfully. Feb 12 21:55:57.241996 env[1227]: time="2024-02-12T21:55:57.241952060Z" level=info msg="shim disconnected" id=e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea Feb 12 21:55:57.241996 env[1227]: time="2024-02-12T21:55:57.241989236Z" level=warning msg="cleaning up after shim disconnected" id=e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea namespace=k8s.io Feb 12 21:55:57.241996 env[1227]: time="2024-02-12T21:55:57.241996299Z" level=info msg="cleaning up dead shim" Feb 12 21:55:57.246818 env[1227]: time="2024-02-12T21:55:57.246795081Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3231 runtime=io.containerd.runc.v2\n" Feb 12 21:55:57.251396 env[1227]: time="2024-02-12T21:55:57.251375222Z" level=info msg="StopContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" returns successfully" Feb 12 21:55:57.251778 env[1227]: time="2024-02-12T21:55:57.251755790Z" level=info msg="StopPodSandbox for \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\"" Feb 12 21:55:57.251816 env[1227]: time="2024-02-12T21:55:57.251797464Z" level=info msg="Container to stop \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.251816 env[1227]: time="2024-02-12T21:55:57.251806413Z" level=info msg="Container to stop \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.251816 env[1227]: time="2024-02-12T21:55:57.251812425Z" level=info msg="Container to stop \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.253035 env[1227]: time="2024-02-12T21:55:57.251818731Z" level=info msg="Container to stop \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.253035 env[1227]: time="2024-02-12T21:55:57.251824221Z" level=info msg="Container to stop \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.252920 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039-shm.mount: Deactivated successfully. Feb 12 21:55:57.278617 env[1227]: time="2024-02-12T21:55:57.278577155Z" level=info msg="shim disconnected" id=c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039 Feb 12 21:55:57.278617 env[1227]: time="2024-02-12T21:55:57.278612626Z" level=warning msg="cleaning up after shim disconnected" id=c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039 namespace=k8s.io Feb 12 21:55:57.278751 env[1227]: time="2024-02-12T21:55:57.278621393Z" level=info msg="cleaning up dead shim" Feb 12 21:55:57.283232 env[1227]: time="2024-02-12T21:55:57.283204412Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n" Feb 12 21:55:57.283415 env[1227]: time="2024-02-12T21:55:57.283397576Z" level=info msg="TearDown network for sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" successfully" Feb 12 21:55:57.283415 env[1227]: time="2024-02-12T21:55:57.283413373Z" level=info msg="StopPodSandbox for \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" returns successfully" Feb 12 21:55:57.348102 kubelet[1648]: E0212 21:55:57.348031 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:57.357623 kubelet[1648]: E0212 21:55:57.357578 1648 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 21:55:57.364659 kubelet[1648]: I0212 21:55:57.363957 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-lib-modules\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364659 kubelet[1648]: I0212 21:55:57.364009 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8565fa0e-4be7-4e21-b4bb-da49b9125bef-clustermesh-secrets\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364659 kubelet[1648]: I0212 21:55:57.364032 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-config-path\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364659 kubelet[1648]: I0212 21:55:57.364057 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-etc-cni-netd\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364659 kubelet[1648]: I0212 21:55:57.364073 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-kernel\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364659 kubelet[1648]: I0212 21:55:57.364096 1648 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-cgroup\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364864 kubelet[1648]: I0212 21:55:57.364112 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-net\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364864 kubelet[1648]: I0212 21:55:57.364145 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-bpf-maps\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364864 kubelet[1648]: I0212 21:55:57.364169 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hubble-tls\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364864 kubelet[1648]: I0212 21:55:57.364185 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kltr\" (UniqueName: \"kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-kube-api-access-6kltr\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364864 kubelet[1648]: I0212 21:55:57.364208 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-xtables-lock\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.364864 kubelet[1648]: I0212 21:55:57.364221 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hostproc\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.365019 kubelet[1648]: I0212 21:55:57.364235 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cni-path\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.365019 kubelet[1648]: I0212 21:55:57.364263 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-run\") pod \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\" (UID: \"8565fa0e-4be7-4e21-b4bb-da49b9125bef\") " Feb 12 21:55:57.365019 kubelet[1648]: I0212 21:55:57.364320 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.365019 kubelet[1648]: I0212 21:55:57.364344 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.365019 kubelet[1648]: I0212 21:55:57.364374 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.365425 kubelet[1648]: W0212 21:55:57.365384 1648 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8565fa0e-4be7-4e21-b4bb-da49b9125bef/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:55:57.366005 kubelet[1648]: I0212 21:55:57.365986 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.366060 kubelet[1648]: I0212 21:55:57.366017 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hostproc" (OuterVolumeSpecName: "hostproc") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.366060 kubelet[1648]: I0212 21:55:57.366034 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cni-path" (OuterVolumeSpecName: "cni-path") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.366060 kubelet[1648]: I0212 21:55:57.364326 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.366279 kubelet[1648]: I0212 21:55:57.366265 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.366459 kubelet[1648]: I0212 21:55:57.366361 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.366535 kubelet[1648]: I0212 21:55:57.366375 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.367634 kubelet[1648]: I0212 21:55:57.367621 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:55:57.370119 kubelet[1648]: I0212 21:55:57.370084 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8565fa0e-4be7-4e21-b4bb-da49b9125bef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:55:57.370184 kubelet[1648]: I0212 21:55:57.370167 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-kube-api-access-6kltr" (OuterVolumeSpecName: "kube-api-access-6kltr") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "kube-api-access-6kltr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:55:57.371543 kubelet[1648]: I0212 21:55:57.371529 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8565fa0e-4be7-4e21-b4bb-da49b9125bef" (UID: "8565fa0e-4be7-4e21-b4bb-da49b9125bef"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:55:57.464972 kubelet[1648]: I0212 21:55:57.464948 1648 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cni-path\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465107 kubelet[1648]: I0212 21:55:57.465099 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-run\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465162 kubelet[1648]: I0212 21:55:57.465154 1648 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-lib-modules\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465209 kubelet[1648]: I0212 21:55:57.465203 1648 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8565fa0e-4be7-4e21-b4bb-da49b9125bef-clustermesh-secrets\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465270 kubelet[1648]: I0212 21:55:57.465264 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-config-path\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465327 kubelet[1648]: I0212 21:55:57.465320 1648 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-etc-cni-netd\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465373 kubelet[1648]: I0212 21:55:57.465367 1648 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-kernel\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465419 kubelet[1648]: I0212 21:55:57.465412 1648 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hubble-tls\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465465 kubelet[1648]: I0212 21:55:57.465458 1648 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-6kltr\" (UniqueName: \"kubernetes.io/projected/8565fa0e-4be7-4e21-b4bb-da49b9125bef-kube-api-access-6kltr\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465510 kubelet[1648]: I0212 21:55:57.465504 1648 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-xtables-lock\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465555 kubelet[1648]: I0212 21:55:57.465549 1648 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-hostproc\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465602 kubelet[1648]: I0212 21:55:57.465596 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-cilium-cgroup\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465647 kubelet[1648]: I0212 21:55:57.465641 1648 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-host-proc-sys-net\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.465694 kubelet[1648]: I0212 21:55:57.465687 1648 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8565fa0e-4be7-4e21-b4bb-da49b9125bef-bpf-maps\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:55:57.582757 kubelet[1648]: I0212 21:55:57.582695 1648 scope.go:115] "RemoveContainer" containerID="e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea" Feb 12 21:55:57.584956 env[1227]: time="2024-02-12T21:55:57.584769579Z" level=info msg="RemoveContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\"" Feb 12 21:55:57.593748 env[1227]: time="2024-02-12T21:55:57.593661502Z" level=info msg="RemoveContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" returns successfully" Feb 12 21:55:57.593963 kubelet[1648]: I0212 21:55:57.593951 1648 scope.go:115] "RemoveContainer" containerID="97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f" Feb 12 21:55:57.594733 env[1227]: time="2024-02-12T21:55:57.594585543Z" level=info msg="RemoveContainer for \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\"" Feb 12 21:55:57.597873 env[1227]: time="2024-02-12T21:55:57.597801926Z" level=info msg="RemoveContainer for \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\" returns successfully" Feb 12 21:55:57.598547 kubelet[1648]: I0212 21:55:57.598530 1648 scope.go:115] "RemoveContainer" containerID="b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b" Feb 12 21:55:57.599593 env[1227]: time="2024-02-12T21:55:57.599396012Z" level=info msg="RemoveContainer for \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\"" Feb 12 21:55:57.600850 env[1227]: time="2024-02-12T21:55:57.600827611Z" level=info msg="RemoveContainer for \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\" returns successfully" Feb 12 21:55:57.601092 kubelet[1648]: I0212 21:55:57.601079 1648 scope.go:115] "RemoveContainer" containerID="77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e" Feb 12 21:55:57.601983 env[1227]: time="2024-02-12T21:55:57.601963046Z" level=info msg="RemoveContainer for \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\"" Feb 12 21:55:57.603620 env[1227]: time="2024-02-12T21:55:57.603596776Z" level=info msg="RemoveContainer for \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\" returns successfully" Feb 12 21:55:57.604046 kubelet[1648]: I0212 21:55:57.604030 1648 scope.go:115] "RemoveContainer" containerID="09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f" Feb 12 21:55:57.605714 env[1227]: time="2024-02-12T21:55:57.605681992Z" level=info msg="RemoveContainer for \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\"" Feb 12 21:55:57.607564 env[1227]: time="2024-02-12T21:55:57.607529770Z" level=info msg="RemoveContainer for \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\" returns successfully" Feb 12 21:55:57.607767 kubelet[1648]: I0212 21:55:57.607755 1648 scope.go:115] "RemoveContainer" containerID="e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea" Feb 12 21:55:57.608082 env[1227]: time="2024-02-12T21:55:57.607985244Z" level=error msg="ContainerStatus for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" failed" error="rpc error: code = NotFound 
desc = an error occurred when try to find container \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\": not found" Feb 12 21:55:57.608196 kubelet[1648]: E0212 21:55:57.608187 1648 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\": not found" containerID="e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea" Feb 12 21:55:57.608285 kubelet[1648]: I0212 21:55:57.608276 1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea} err="failed to get container status \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\": not found" Feb 12 21:55:57.608343 kubelet[1648]: I0212 21:55:57.608333 1648 scope.go:115] "RemoveContainer" containerID="97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f" Feb 12 21:55:57.608532 env[1227]: time="2024-02-12T21:55:57.608485119Z" level=error msg="ContainerStatus for \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\": not found" Feb 12 21:55:57.608618 kubelet[1648]: E0212 21:55:57.608610 1648 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\": not found" containerID="97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f" Feb 12 21:55:57.608675 kubelet[1648]: I0212 21:55:57.608667 1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f} err="failed to get container status \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\": rpc error: code = NotFound desc = an error occurred when try to find container \"97765aa6578545ff809df050fb9d410f7b460942f61cd5ae0a62762ef784632f\": not found" Feb 12 21:55:57.608723 kubelet[1648]: I0212 21:55:57.608716 1648 scope.go:115] "RemoveContainer" containerID="b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b" Feb 12 21:55:57.608909 env[1227]: time="2024-02-12T21:55:57.608846840Z" level=error msg="ContainerStatus for \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\": not found" Feb 12 21:55:57.608987 kubelet[1648]: E0212 21:55:57.608980 1648 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\": not found" containerID="b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b" Feb 12 21:55:57.609049 kubelet[1648]: I0212 21:55:57.609042 1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b} err="failed to get 
container status \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b764a2e420d3bfab128a475c26f305c555ca102fb6cef8e29180aa2a2243b98b\": not found" Feb 12 21:55:57.609106 kubelet[1648]: I0212 21:55:57.609099 1648 scope.go:115] "RemoveContainer" containerID="77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e" Feb 12 21:55:57.609304 env[1227]: time="2024-02-12T21:55:57.609245682Z" level=error msg="ContainerStatus for \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\": not found" Feb 12 21:55:57.609391 kubelet[1648]: E0212 21:55:57.609383 1648 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\": not found" containerID="77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e" Feb 12 21:55:57.609459 kubelet[1648]: I0212 21:55:57.609452 1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e} err="failed to get container status \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\": rpc error: code = NotFound desc = an error occurred when try to find container \"77b7d12862db0c04331033989058e5fd05ce6f6ab6ab7fc14256986f0ae4205e\": not found" Feb 12 21:55:57.609512 kubelet[1648]: I0212 21:55:57.609504 1648 scope.go:115] "RemoveContainer" containerID="09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f" Feb 12 21:55:57.609723 env[1227]: time="2024-02-12T21:55:57.609662161Z" level=error msg="ContainerStatus for \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\": not found" Feb 12 21:55:57.609809 kubelet[1648]: E0212 21:55:57.609802 1648 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\": not found" containerID="09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f" Feb 12 21:55:57.609865 kubelet[1648]: I0212 21:55:57.609858 1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f} err="failed to get container status \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"09db7208298275820296744b00b120fa7a7b539cfd63d2e95b61876037dfee1f\": not found" Feb 12 21:55:57.915330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039-rootfs.mount: Deactivated successfully. Feb 12 21:55:57.915473 systemd[1]: var-lib-kubelet-pods-8565fa0e\x2d4be7\x2d4e21\x2db4bb\x2dda49b9125bef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6kltr.mount: Deactivated successfully. 
Feb 12 21:55:57.915571 systemd[1]: var-lib-kubelet-pods-8565fa0e\x2d4be7\x2d4e21\x2db4bb\x2dda49b9125bef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 21:55:57.915651 systemd[1]: var-lib-kubelet-pods-8565fa0e\x2d4be7\x2d4e21\x2db4bb\x2dda49b9125bef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 21:55:58.348782 kubelet[1648]: E0212 21:55:58.348733 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:58.493540 env[1227]: time="2024-02-12T21:55:58.493460630Z" level=info msg="StopContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" with timeout 1 (s)" Feb 12 21:55:58.493540 env[1227]: time="2024-02-12T21:55:58.493489046Z" level=error msg="StopContainer for \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\": not found" Feb 12 21:55:58.493896 kubelet[1648]: I0212 21:55:58.493879 1648 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8565fa0e-4be7-4e21-b4bb-da49b9125bef path="/var/lib/kubelet/pods/8565fa0e-4be7-4e21-b4bb-da49b9125bef/volumes" Feb 12 21:55:58.494162 kubelet[1648]: E0212 21:55:58.494148 1648 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea\": not found" containerID="e8c2f65c0a2703c2439bebe5a66693a93d42b55d150cd7840a48b7148d76c0ea" Feb 12 21:55:58.494282 env[1227]: time="2024-02-12T21:55:58.494245186Z" level=info msg="StopPodSandbox for \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\"" Feb 12 21:55:58.494352 env[1227]: time="2024-02-12T21:55:58.494304496Z" level=info msg="TearDown network for sandbox \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" successfully" Feb 12 21:55:58.494352 env[1227]: time="2024-02-12T21:55:58.494326646Z" level=info msg="StopPodSandbox for \"c7c761caf5bc7d4c813639abef9787fa68f51a4443b5b4d41dda0cefd7c36039\" returns successfully" Feb 12 21:55:59.349006 kubelet[1648]: E0212 21:55:59.348982 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:59.513670 kubelet[1648]: I0212 21:55:59.513623 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:59.513833 kubelet[1648]: E0212 21:55:59.513702 1648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8565fa0e-4be7-4e21-b4bb-da49b9125bef" containerName="mount-cgroup" Feb 12 21:55:59.513833 kubelet[1648]: E0212 21:55:59.513711 1648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8565fa0e-4be7-4e21-b4bb-da49b9125bef" containerName="mount-bpf-fs" Feb 12 21:55:59.513833 kubelet[1648]: E0212 21:55:59.513715 1648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8565fa0e-4be7-4e21-b4bb-da49b9125bef" containerName="clean-cilium-state" Feb 12 21:55:59.513833 kubelet[1648]: E0212 21:55:59.513719 1648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8565fa0e-4be7-4e21-b4bb-da49b9125bef" containerName="cilium-agent" Feb 12 21:55:59.513833 kubelet[1648]: E0212 21:55:59.513723 1648 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="8565fa0e-4be7-4e21-b4bb-da49b9125bef" containerName="apply-sysctl-overwrites" Feb 12 21:55:59.513833 kubelet[1648]: I0212 21:55:59.513753 1648 memory_manager.go:346] "RemoveStaleState removing state" podUID="8565fa0e-4be7-4e21-b4bb-da49b9125bef" containerName="cilium-agent" Feb 12 21:55:59.540301 kubelet[1648]: I0212 21:55:59.540271 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:59.577208 kubelet[1648]: I0212 21:55:59.577186 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-clustermesh-secrets\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577375 kubelet[1648]: I0212 21:55:59.577364 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-config-path\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577469 kubelet[1648]: I0212 21:55:59.577456 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hostproc\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577509 kubelet[1648]: I0212 21:55:59.577478 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-lib-modules\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577509 kubelet[1648]: I0212 21:55:59.577495 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-xtables-lock\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577550 kubelet[1648]: I0212 21:55:59.577510 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74db4fd8-7c59-4b75-aa7f-8b9b1511eed2-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-rgjl6\" (UID: \"74db4fd8-7c59-4b75-aa7f-8b9b1511eed2\") " pod="kube-system/cilium-operator-f59cbd8c6-rgjl6" Feb 12 21:55:59.577550 kubelet[1648]: I0212 21:55:59.577522 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-bpf-maps\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577550 kubelet[1648]: I0212 21:55:59.577535 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-cgroup\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577609 kubelet[1648]: I0212 21:55:59.577551 1648 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9jx5\" (UniqueName: \"kubernetes.io/projected/74db4fd8-7c59-4b75-aa7f-8b9b1511eed2-kube-api-access-c9jx5\") pod \"cilium-operator-f59cbd8c6-rgjl6\" (UID: \"74db4fd8-7c59-4b75-aa7f-8b9b1511eed2\") " pod="kube-system/cilium-operator-f59cbd8c6-rgjl6" Feb 12 21:55:59.577609 kubelet[1648]: I0212 21:55:59.577565 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-run\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577609 kubelet[1648]: I0212 21:55:59.577577 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-ipsec-secrets\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577609 kubelet[1648]: I0212 21:55:59.577588 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-net\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577609 kubelet[1648]: I0212 21:55:59.577601 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-kernel\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577706 kubelet[1648]: I0212 21:55:59.577616 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hubble-tls\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577706 kubelet[1648]: I0212 21:55:59.577630 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbngq\" (UniqueName: \"kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-kube-api-access-vbngq\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577706 kubelet[1648]: I0212 21:55:59.577642 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cni-path\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.577706 kubelet[1648]: I0212 21:55:59.577655 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-etc-cni-netd\") pod \"cilium-7c8cg\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " pod="kube-system/cilium-7c8cg" Feb 12 21:55:59.817459 env[1227]: time="2024-02-12T21:55:59.817423989Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-7c8cg,Uid:2a0570dc-b5b6-466d-adcc-c97e6c7d0322,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:59.844507 env[1227]: time="2024-02-12T21:55:59.844166847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-rgjl6,Uid:74db4fd8-7c59-4b75-aa7f-8b9b1511eed2,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:59.847784 env[1227]: time="2024-02-12T21:55:59.847753831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:59.847882 env[1227]: time="2024-02-12T21:55:59.847867932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:59.847954 env[1227]: time="2024-02-12T21:55:59.847941302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:59.848142 env[1227]: time="2024-02-12T21:55:59.848114901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af pid=3291 runtime=io.containerd.runc.v2 Feb 12 21:55:59.870542 env[1227]: time="2024-02-12T21:55:59.870513086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7c8cg,Uid:2a0570dc-b5b6-466d-adcc-c97e6c7d0322,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\"" Feb 12 21:55:59.871913 env[1227]: time="2024-02-12T21:55:59.871892504Z" level=info msg="CreateContainer within sandbox \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:55:59.901037 env[1227]: time="2024-02-12T21:55:59.900993487Z" level=info msg="CreateContainer within sandbox \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad\"" Feb 12 21:55:59.901155 env[1227]: time="2024-02-12T21:55:59.899937982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:59.901155 env[1227]: time="2024-02-12T21:55:59.899983402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:59.901155 env[1227]: time="2024-02-12T21:55:59.899990703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:59.901155 env[1227]: time="2024-02-12T21:55:59.900317865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28676d8ed95d3e615a026ecfd743bb1b1451eec3c4bf8a5a753b9e56e2ce8c0e pid=3335 runtime=io.containerd.runc.v2 Feb 12 21:55:59.902061 env[1227]: time="2024-02-12T21:55:59.902041553Z" level=info msg="StartContainer for \"dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad\"" Feb 12 21:55:59.939832 env[1227]: time="2024-02-12T21:55:59.939802962Z" level=info msg="StartContainer for \"dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad\" returns successfully" Feb 12 21:55:59.957691 env[1227]: time="2024-02-12T21:55:59.957658756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-rgjl6,Uid:74db4fd8-7c59-4b75-aa7f-8b9b1511eed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"28676d8ed95d3e615a026ecfd743bb1b1451eec3c4bf8a5a753b9e56e2ce8c0e\"" Feb 12 21:55:59.960380 env[1227]: time="2024-02-12T21:55:59.960359439Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 21:56:00.014309 env[1227]: time="2024-02-12T21:56:00.014277465Z" level=info msg="shim disconnected" id=dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad Feb 12 21:56:00.014480 env[1227]: time="2024-02-12T21:56:00.014468055Z" level=warning msg="cleaning up after shim disconnected" id=dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad namespace=k8s.io Feb 12 21:56:00.014532 env[1227]: time="2024-02-12T21:56:00.014517290Z" level=info msg="cleaning up dead shim" Feb 12 21:56:00.019739 env[1227]: time="2024-02-12T21:56:00.019707022Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3421 runtime=io.containerd.runc.v2\n" Feb 12 21:56:00.350199 kubelet[1648]: E0212 21:56:00.350165 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:00.588047 env[1227]: time="2024-02-12T21:56:00.588006571Z" level=info msg="StopPodSandbox for \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\"" Feb 12 21:56:00.588047 env[1227]: time="2024-02-12T21:56:00.588079771Z" level=info msg="Container to stop \"dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:56:00.624164 env[1227]: time="2024-02-12T21:56:00.624094413Z" level=info msg="shim disconnected" id=e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af Feb 12 21:56:00.624164 env[1227]: time="2024-02-12T21:56:00.624121598Z" level=warning msg="cleaning up after shim disconnected" id=e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af namespace=k8s.io Feb 12 21:56:00.624164 env[1227]: time="2024-02-12T21:56:00.624127607Z" level=info msg="cleaning up dead shim" Feb 12 21:56:00.629588 env[1227]: time="2024-02-12T21:56:00.629537338Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3453 runtime=io.containerd.runc.v2\n" Feb 12 21:56:00.629774 env[1227]: time="2024-02-12T21:56:00.629750877Z" level=info msg="TearDown network for sandbox \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\" successfully" Feb 12 
21:56:00.629774 env[1227]: time="2024-02-12T21:56:00.629770147Z" level=info msg="StopPodSandbox for \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\" returns successfully" Feb 12 21:56:00.684267 kubelet[1648]: I0212 21:56:00.684068 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-etc-cni-netd\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684267 kubelet[1648]: I0212 21:56:00.684114 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-net\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684267 kubelet[1648]: I0212 21:56:00.684133 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hubble-tls\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684267 kubelet[1648]: I0212 21:56:00.684136 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.684267 kubelet[1648]: I0212 21:56:00.684153 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbngq\" (UniqueName: \"kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-kube-api-access-vbngq\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684452 kubelet[1648]: I0212 21:56:00.684160 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.684452 kubelet[1648]: I0212 21:56:00.684167 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-lib-modules\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684452 kubelet[1648]: I0212 21:56:00.684191 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-cgroup\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684452 kubelet[1648]: I0212 21:56:00.684207 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-ipsec-secrets\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684452 kubelet[1648]: I0212 21:56:00.684218 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-kernel\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684452 kubelet[1648]: I0212 21:56:00.684231 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-config-path\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.684567 kubelet[1648]: I0212 21:56:00.684242 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-bpf-maps\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684596 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cni-path\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684612 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-run\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684625 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-clustermesh-secrets\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684637 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hostproc\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 
12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684646 1648 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-xtables-lock\") pod \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\" (UID: \"2a0570dc-b5b6-466d-adcc-c97e6c7d0322\") " Feb 12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684668 1648 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-net\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.687625 kubelet[1648]: I0212 21:56:00.684675 1648 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-etc-cni-netd\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.687794 kubelet[1648]: I0212 21:56:00.684687 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.687794 kubelet[1648]: I0212 21:56:00.684699 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.687794 kubelet[1648]: W0212 21:56:00.684764 1648 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2a0570dc-b5b6-466d-adcc-c97e6c7d0322/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:56:00.691785 kubelet[1648]: I0212 21:56:00.687869 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.691785 kubelet[1648]: I0212 21:56:00.687888 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cni-path" (OuterVolumeSpecName: "cni-path") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.691785 kubelet[1648]: I0212 21:56:00.687898 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.691785 kubelet[1648]: I0212 21:56:00.687995 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hostproc" (OuterVolumeSpecName: "hostproc") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.691785 kubelet[1648]: I0212 21:56:00.688010 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.689090 systemd[1]: var-lib-kubelet-pods-2a0570dc\x2db5b6\x2d466d\x2dadcc\x2dc97e6c7d0322-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 21:56:00.692055 kubelet[1648]: I0212 21:56:00.688091 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:56:00.692055 kubelet[1648]: I0212 21:56:00.689728 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:56:00.692055 kubelet[1648]: I0212 21:56:00.691837 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:56:00.691497 systemd[1]: var-lib-kubelet-pods-2a0570dc\x2db5b6\x2d466d\x2dadcc\x2dc97e6c7d0322-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 21:56:00.692454 kubelet[1648]: I0212 21:56:00.692439 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:56:00.694166 systemd[1]: var-lib-kubelet-pods-2a0570dc\x2db5b6\x2d466d\x2dadcc\x2dc97e6c7d0322-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 21:56:00.694620 kubelet[1648]: I0212 21:56:00.694605 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:56:00.696678 systemd[1]: var-lib-kubelet-pods-2a0570dc\x2db5b6\x2d466d\x2dadcc\x2dc97e6c7d0322-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbngq.mount: Deactivated successfully. Feb 12 21:56:00.697120 kubelet[1648]: I0212 21:56:00.697103 1648 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-kube-api-access-vbngq" (OuterVolumeSpecName: "kube-api-access-vbngq") pod "2a0570dc-b5b6-466d-adcc-c97e6c7d0322" (UID: "2a0570dc-b5b6-466d-adcc-c97e6c7d0322"). InnerVolumeSpecName "kube-api-access-vbngq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:56:00.785420 kubelet[1648]: I0212 21:56:00.785395 1648 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-bpf-maps\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785420 kubelet[1648]: I0212 21:56:00.785417 1648 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cni-path\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785420 kubelet[1648]: I0212 21:56:00.785424 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-config-path\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785420 kubelet[1648]: I0212 21:56:00.785430 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-run\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785436 1648 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-clustermesh-secrets\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785443 1648 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hostproc\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785451 1648 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-xtables-lock\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785465 1648 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vbngq\" (UniqueName: \"kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-kube-api-access-vbngq\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785475 1648 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-hubble-tls\") on node \"10.67.124.137\" DevicePath 
\"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785485 1648 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-lib-modules\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785494 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-cgroup\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785589 kubelet[1648]: I0212 21:56:00.785504 1648 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-cilium-ipsec-secrets\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:00.785749 kubelet[1648]: I0212 21:56:00.785513 1648 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a0570dc-b5b6-466d-adcc-c97e6c7d0322-host-proc-sys-kernel\") on node \"10.67.124.137\" DevicePath \"\"" Feb 12 21:56:01.350633 kubelet[1648]: E0212 21:56:01.350606 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:01.589988 kubelet[1648]: I0212 21:56:01.589962 1648 scope.go:115] "RemoveContainer" containerID="dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad" Feb 12 21:56:01.593381 env[1227]: time="2024-02-12T21:56:01.593355659Z" level=info msg="RemoveContainer for \"dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad\"" Feb 12 21:56:01.596709 env[1227]: time="2024-02-12T21:56:01.596678363Z" level=info msg="RemoveContainer for \"dc86db6db81f53f3141fcb8d3e5f6786bbcc9db279420bf08d1076f3a38f67ad\" returns successfully" Feb 12 21:56:01.639177 kubelet[1648]: I0212 21:56:01.638991 1648 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:56:01.639177 kubelet[1648]: E0212 21:56:01.639042 1648 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a0570dc-b5b6-466d-adcc-c97e6c7d0322" containerName="mount-cgroup" Feb 12 21:56:01.639177 kubelet[1648]: I0212 21:56:01.639065 1648 memory_manager.go:346] "RemoveStaleState removing state" podUID="2a0570dc-b5b6-466d-adcc-c97e6c7d0322" containerName="mount-cgroup" Feb 12 21:56:01.689902 kubelet[1648]: I0212 21:56:01.689876 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-cilium-run\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.689992 kubelet[1648]: I0212 21:56:01.689909 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a68ade92-7ef7-4a9c-840e-cfda4b56c926-clustermesh-secrets\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.689992 kubelet[1648]: I0212 21:56:01.689926 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a68ade92-7ef7-4a9c-840e-cfda4b56c926-cilium-config-path\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.689992 kubelet[1648]: I0212 
21:56:01.689938 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9cdc\" (UniqueName: \"kubernetes.io/projected/a68ade92-7ef7-4a9c-840e-cfda4b56c926-kube-api-access-q9cdc\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.689992 kubelet[1648]: I0212 21:56:01.689949 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-host-proc-sys-net\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.689992 kubelet[1648]: I0212 21:56:01.689960 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-cilium-cgroup\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690121 kubelet[1648]: I0212 21:56:01.689971 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-cni-path\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690121 kubelet[1648]: I0212 21:56:01.689981 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a68ade92-7ef7-4a9c-840e-cfda4b56c926-cilium-ipsec-secrets\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690121 kubelet[1648]: I0212 21:56:01.689993 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-host-proc-sys-kernel\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690121 kubelet[1648]: I0212 21:56:01.690004 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a68ade92-7ef7-4a9c-840e-cfda4b56c926-hubble-tls\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690121 kubelet[1648]: I0212 21:56:01.690015 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-hostproc\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690121 kubelet[1648]: I0212 21:56:01.690026 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-bpf-maps\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690240 kubelet[1648]: I0212 21:56:01.690036 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-etc-cni-netd\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690240 kubelet[1648]: I0212 21:56:01.690046 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-lib-modules\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.690240 kubelet[1648]: I0212 21:56:01.690056 1648 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a68ade92-7ef7-4a9c-840e-cfda4b56c926-xtables-lock\") pod \"cilium-64n96\" (UID: \"a68ade92-7ef7-4a9c-840e-cfda4b56c926\") " pod="kube-system/cilium-64n96" Feb 12 21:56:01.828262 env[1227]: time="2024-02-12T21:56:01.827743539Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:01.828262 env[1227]: time="2024-02-12T21:56:01.828223841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:01.828996 env[1227]: time="2024-02-12T21:56:01.828975845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:01.829371 env[1227]: time="2024-02-12T21:56:01.829351342Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 21:56:01.830606 env[1227]: time="2024-02-12T21:56:01.830529383Z" level=info msg="CreateContainer within sandbox \"28676d8ed95d3e615a026ecfd743bb1b1451eec3c4bf8a5a753b9e56e2ce8c0e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 21:56:01.842957 env[1227]: time="2024-02-12T21:56:01.842922634Z" level=info msg="CreateContainer within sandbox \"28676d8ed95d3e615a026ecfd743bb1b1451eec3c4bf8a5a753b9e56e2ce8c0e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"782ad82ad88c4cc63aa06dcf892801724189207de5b8d362bcc039045ff86cb7\"" Feb 12 21:56:01.843230 env[1227]: time="2024-02-12T21:56:01.843198739Z" level=info msg="StartContainer for \"782ad82ad88c4cc63aa06dcf892801724189207de5b8d362bcc039045ff86cb7\"" Feb 12 21:56:01.878394 env[1227]: time="2024-02-12T21:56:01.878366898Z" level=info msg="StartContainer for \"782ad82ad88c4cc63aa06dcf892801724189207de5b8d362bcc039045ff86cb7\" returns successfully" Feb 12 21:56:01.943676 env[1227]: time="2024-02-12T21:56:01.943614487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64n96,Uid:a68ade92-7ef7-4a9c-840e-cfda4b56c926,Namespace:kube-system,Attempt:0,}" Feb 12 21:56:01.972224 env[1227]: time="2024-02-12T21:56:01.971698667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:56:01.972224 env[1227]: time="2024-02-12T21:56:01.971738409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:56:01.972224 env[1227]: time="2024-02-12T21:56:01.971750103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:56:01.972401 env[1227]: time="2024-02-12T21:56:01.972301502Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd pid=3518 runtime=io.containerd.runc.v2 Feb 12 21:56:02.005663 env[1227]: time="2024-02-12T21:56:02.005629623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64n96,Uid:a68ade92-7ef7-4a9c-840e-cfda4b56c926,Namespace:kube-system,Attempt:0,} returns sandbox id \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\"" Feb 12 21:56:02.007656 env[1227]: time="2024-02-12T21:56:02.007638327Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:56:02.012434 env[1227]: time="2024-02-12T21:56:02.012392998Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c98b2b782d4cf350fa579abe61b3c543d90c62b5e8f64efb582385ece696264a\"" Feb 12 21:56:02.013013 env[1227]: time="2024-02-12T21:56:02.012989872Z" level=info msg="StartContainer for \"c98b2b782d4cf350fa579abe61b3c543d90c62b5e8f64efb582385ece696264a\"" Feb 12 21:56:02.050307 env[1227]: time="2024-02-12T21:56:02.050281885Z" level=info msg="StartContainer for \"c98b2b782d4cf350fa579abe61b3c543d90c62b5e8f64efb582385ece696264a\" returns successfully" Feb 12 21:56:02.351255 kubelet[1648]: E0212 21:56:02.351189 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:02.358862 kubelet[1648]: E0212 21:56:02.358835 1648 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 21:56:02.389491 env[1227]: time="2024-02-12T21:56:02.389429656Z" level=error msg="collecting metrics for c98b2b782d4cf350fa579abe61b3c543d90c62b5e8f64efb582385ece696264a" error="cgroups: cgroup deleted: unknown" Feb 12 21:56:02.493473 env[1227]: time="2024-02-12T21:56:02.493449737Z" level=info msg="StopPodSandbox for \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\"" Feb 12 21:56:02.493679 env[1227]: time="2024-02-12T21:56:02.493652909Z" level=info msg="TearDown network for sandbox \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\" successfully" Feb 12 21:56:02.493753 env[1227]: time="2024-02-12T21:56:02.493737258Z" level=info msg="StopPodSandbox for \"e2acc59b9769d05d18be778ac35563fb68bc15051c4f630531cee7d0b65699af\" returns successfully" Feb 12 21:56:02.494054 kubelet[1648]: I0212 21:56:02.494044 1648 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2a0570dc-b5b6-466d-adcc-c97e6c7d0322 path="/var/lib/kubelet/pods/2a0570dc-b5b6-466d-adcc-c97e6c7d0322/volumes" Feb 12 21:56:02.506575 env[1227]: 
time="2024-02-12T21:56:02.506536243Z" level=info msg="shim disconnected" id=c98b2b782d4cf350fa579abe61b3c543d90c62b5e8f64efb582385ece696264a Feb 12 21:56:02.506575 env[1227]: time="2024-02-12T21:56:02.506573400Z" level=warning msg="cleaning up after shim disconnected" id=c98b2b782d4cf350fa579abe61b3c543d90c62b5e8f64efb582385ece696264a namespace=k8s.io Feb 12 21:56:02.506671 env[1227]: time="2024-02-12T21:56:02.506582937Z" level=info msg="cleaning up dead shim" Feb 12 21:56:02.511966 env[1227]: time="2024-02-12T21:56:02.511945139Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3606 runtime=io.containerd.runc.v2\n" Feb 12 21:56:02.596833 env[1227]: time="2024-02-12T21:56:02.596806868Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:56:02.600671 kubelet[1648]: I0212 21:56:02.600417 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-rgjl6" podStartSLOduration=-9.22337203325439e+09 pod.CreationTimestamp="2024-02-12 21:55:59 +0000 UTC" firstStartedPulling="2024-02-12 21:55:59.959839018 +0000 UTC m=+68.098523382" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:02.600356896 +0000 UTC m=+70.739041263" watchObservedRunningTime="2024-02-12 21:56:02.600386044 +0000 UTC m=+70.739070410" Feb 12 21:56:02.603357 env[1227]: time="2024-02-12T21:56:02.603029974Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed898841c70fd291200de6f6e47e3ebecebab82557abe7044d7f64037e3ecef4\"" Feb 12 21:56:02.603614 env[1227]: time="2024-02-12T21:56:02.603600532Z" level=info msg="StartContainer for \"ed898841c70fd291200de6f6e47e3ebecebab82557abe7044d7f64037e3ecef4\"" Feb 12 21:56:02.638243 env[1227]: time="2024-02-12T21:56:02.638212353Z" level=info msg="StartContainer for \"ed898841c70fd291200de6f6e47e3ebecebab82557abe7044d7f64037e3ecef4\" returns successfully" Feb 12 21:56:02.682816 env[1227]: time="2024-02-12T21:56:02.682770831Z" level=info msg="shim disconnected" id=ed898841c70fd291200de6f6e47e3ebecebab82557abe7044d7f64037e3ecef4 Feb 12 21:56:02.682816 env[1227]: time="2024-02-12T21:56:02.682811516Z" level=warning msg="cleaning up after shim disconnected" id=ed898841c70fd291200de6f6e47e3ebecebab82557abe7044d7f64037e3ecef4 namespace=k8s.io Feb 12 21:56:02.682816 env[1227]: time="2024-02-12T21:56:02.682820247Z" level=info msg="cleaning up dead shim" Feb 12 21:56:02.691752 env[1227]: time="2024-02-12T21:56:02.691728844Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3668 runtime=io.containerd.runc.v2\n" Feb 12 21:56:03.351388 kubelet[1648]: E0212 21:56:03.351357 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:03.598872 env[1227]: time="2024-02-12T21:56:03.598843727Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:56:03.631904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736501335.mount: Deactivated successfully. 
Feb 12 21:56:03.636093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579126303.mount: Deactivated successfully. Feb 12 21:56:03.655391 env[1227]: time="2024-02-12T21:56:03.655355603Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8bc7269b25250afa8435861dc871e6dcc68c02cba82ff3c83d0906b88ba3c709\"" Feb 12 21:56:03.655967 env[1227]: time="2024-02-12T21:56:03.655939886Z" level=info msg="StartContainer for \"8bc7269b25250afa8435861dc871e6dcc68c02cba82ff3c83d0906b88ba3c709\"" Feb 12 21:56:03.690302 env[1227]: time="2024-02-12T21:56:03.690271878Z" level=info msg="StartContainer for \"8bc7269b25250afa8435861dc871e6dcc68c02cba82ff3c83d0906b88ba3c709\" returns successfully" Feb 12 21:56:03.710790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bc7269b25250afa8435861dc871e6dcc68c02cba82ff3c83d0906b88ba3c709-rootfs.mount: Deactivated successfully. Feb 12 21:56:03.713925 env[1227]: time="2024-02-12T21:56:03.713893417Z" level=info msg="shim disconnected" id=8bc7269b25250afa8435861dc871e6dcc68c02cba82ff3c83d0906b88ba3c709 Feb 12 21:56:03.714017 env[1227]: time="2024-02-12T21:56:03.713924540Z" level=warning msg="cleaning up after shim disconnected" id=8bc7269b25250afa8435861dc871e6dcc68c02cba82ff3c83d0906b88ba3c709 namespace=k8s.io Feb 12 21:56:03.714017 env[1227]: time="2024-02-12T21:56:03.713932410Z" level=info msg="cleaning up dead shim" Feb 12 21:56:03.718906 env[1227]: time="2024-02-12T21:56:03.718880399Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3725 runtime=io.containerd.runc.v2\n" Feb 12 21:56:04.351844 kubelet[1648]: E0212 21:56:04.351812 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:04.602179 env[1227]: time="2024-02-12T21:56:04.602005842Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:56:04.639907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628081706.mount: Deactivated successfully. Feb 12 21:56:04.654491 env[1227]: time="2024-02-12T21:56:04.654453706Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e9fae7d612f02d30b99b02684001eca40c182f17a38b87168ca2f062a233533\"" Feb 12 21:56:04.655194 env[1227]: time="2024-02-12T21:56:04.655180586Z" level=info msg="StartContainer for \"8e9fae7d612f02d30b99b02684001eca40c182f17a38b87168ca2f062a233533\"" Feb 12 21:56:04.683759 env[1227]: time="2024-02-12T21:56:04.683732307Z" level=info msg="StartContainer for \"8e9fae7d612f02d30b99b02684001eca40c182f17a38b87168ca2f062a233533\" returns successfully" Feb 12 21:56:04.686650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781254073.mount: Deactivated successfully. Feb 12 21:56:04.696517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e9fae7d612f02d30b99b02684001eca40c182f17a38b87168ca2f062a233533-rootfs.mount: Deactivated successfully. 
Feb 12 21:56:04.700487 env[1227]: time="2024-02-12T21:56:04.700386350Z" level=info msg="shim disconnected" id=8e9fae7d612f02d30b99b02684001eca40c182f17a38b87168ca2f062a233533 Feb 12 21:56:04.700487 env[1227]: time="2024-02-12T21:56:04.700415810Z" level=warning msg="cleaning up after shim disconnected" id=8e9fae7d612f02d30b99b02684001eca40c182f17a38b87168ca2f062a233533 namespace=k8s.io Feb 12 21:56:04.700487 env[1227]: time="2024-02-12T21:56:04.700431393Z" level=info msg="cleaning up dead shim" Feb 12 21:56:04.706538 env[1227]: time="2024-02-12T21:56:04.706508846Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3782 runtime=io.containerd.runc.v2\n" Feb 12 21:56:05.352896 kubelet[1648]: E0212 21:56:05.352859 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:05.605819 env[1227]: time="2024-02-12T21:56:05.605660005Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:56:05.651447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869885007.mount: Deactivated successfully. Feb 12 21:56:05.671950 env[1227]: time="2024-02-12T21:56:05.671893130Z" level=info msg="CreateContainer within sandbox \"b882c439c1e3b4037c4b3d5798b93e5276b5515c1e0f433dd58a3b2d91d5fadd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35\"" Feb 12 21:56:05.672398 env[1227]: time="2024-02-12T21:56:05.672379821Z" level=info msg="StartContainer for \"0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35\"" Feb 12 21:56:05.686770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864233471.mount: Deactivated successfully. Feb 12 21:56:05.714289 env[1227]: time="2024-02-12T21:56:05.714255773Z" level=info msg="StartContainer for \"0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35\" returns successfully" Feb 12 21:56:05.728674 systemd[1]: run-containerd-runc-k8s.io-0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35-runc.FEkzeS.mount: Deactivated successfully. 
Feb 12 21:56:06.353906 kubelet[1648]: E0212 21:56:06.353881 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:06.419270 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 21:56:06.548006 kubelet[1648]: I0212 21:56:06.547953 1648 setters.go:548] "Node became not ready" node="10.67.124.137" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 21:56:06.54787667 +0000 UTC m=+74.686561033 LastTransitionTime:2024-02-12 21:56:06.54787667 +0000 UTC m=+74.686561033 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 21:56:06.621177 kubelet[1648]: I0212 21:56:06.621075 1648 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-64n96" podStartSLOduration=5.621038504 pod.CreationTimestamp="2024-02-12 21:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:06.620732584 +0000 UTC m=+74.759416957" watchObservedRunningTime="2024-02-12 21:56:06.621038504 +0000 UTC m=+74.759722871" Feb 12 21:56:07.354458 kubelet[1648]: E0212 21:56:07.354427 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:07.989310 systemd[1]: run-containerd-runc-k8s.io-0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35-runc.0q6UC2.mount: Deactivated successfully. Feb 12 21:56:08.355301 kubelet[1648]: E0212 21:56:08.355217 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:08.721269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:56:08.718483 systemd-networkd[1110]: lxc_health: Link UP Feb 12 21:56:08.722466 systemd-networkd[1110]: lxc_health: Gained carrier Feb 12 21:56:09.355585 kubelet[1648]: E0212 21:56:09.355552 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:10.299461 systemd[1]: run-containerd-runc-k8s.io-0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35-runc.dcHx0w.mount: Deactivated successfully. Feb 12 21:56:10.328362 systemd-networkd[1110]: lxc_health: Gained IPv6LL Feb 12 21:56:10.355988 kubelet[1648]: E0212 21:56:10.355912 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:11.356391 kubelet[1648]: E0212 21:56:11.356366 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:12.303875 kubelet[1648]: E0212 21:56:12.303844 1648 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:12.357321 kubelet[1648]: E0212 21:56:12.357294 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:12.387805 systemd[1]: run-containerd-runc-k8s.io-0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35-runc.djUoYn.mount: Deactivated successfully. 
Feb 12 21:56:12.426646 kubelet[1648]: E0212 21:56:12.426539 1648 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35798->127.0.0.1:41059: write tcp 127.0.0.1:35798->127.0.0.1:41059: write: broken pipe Feb 12 21:56:13.358738 kubelet[1648]: E0212 21:56:13.358709 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:14.359339 kubelet[1648]: E0212 21:56:14.359307 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:14.464466 systemd[1]: run-containerd-runc-k8s.io-0e602063f91a1f164df734abb9f58c6323c5919774839e993aa9c915b2797a35-runc.BVeY3c.mount: Deactivated successfully. Feb 12 21:56:15.360333 kubelet[1648]: E0212 21:56:15.360296 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:16.360681 kubelet[1648]: E0212 21:56:16.360655 1648 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"